00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1018 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3685 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.051 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.051 The recommended git tool is: git 00:00:00.052 using credential 00000000-0000-0000-0000-000000000002 00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.074 Fetching changes from the remote Git repository 00:00:00.077 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.106 Using shallow fetch with depth 1 00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.106 > git --version # timeout=10 00:00:00.134 > git --version # 'git version 2.39.2' 00:00:00.134 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.166 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.166 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.406 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.419 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.450 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.450 > git config core.sparsecheckout # timeout=10 00:00:03.462 > git read-tree -mu HEAD # timeout=10 00:00:03.478 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.499 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.500 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.631 [Pipeline] Start of Pipeline 00:00:03.642 [Pipeline] library 00:00:03.643 Loading library shm_lib@master 00:00:03.643 Library shm_lib@master is cached. Copying from home. 00:00:03.658 [Pipeline] node 00:00:03.670 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:03.672 [Pipeline] { 00:00:03.681 [Pipeline] catchError 00:00:03.682 [Pipeline] { 00:00:03.697 [Pipeline] wrap 00:00:03.705 [Pipeline] { 00:00:03.713 [Pipeline] stage 00:00:03.715 [Pipeline] { (Prologue) 00:00:03.729 [Pipeline] echo 00:00:03.730 Node: VM-host-SM0 00:00:03.736 [Pipeline] cleanWs 00:00:03.746 [WS-CLEANUP] Deleting project workspace... 00:00:03.746 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.752 [WS-CLEANUP] done 00:00:04.053 [Pipeline] setCustomBuildProperty 00:00:04.162 [Pipeline] httpRequest 00:00:04.487 [Pipeline] echo 00:00:04.489 Sorcerer 10.211.164.20 is alive 00:00:04.501 [Pipeline] retry 00:00:04.504 [Pipeline] { 00:00:04.518 [Pipeline] httpRequest 00:00:04.523 HttpMethod: GET 00:00:04.523 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.524 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.530 Response Code: HTTP/1.1 200 OK 00:00:04.531 Success: Status code 200 is in the accepted range: 200,404 00:00:04.531 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.655 [Pipeline] } 00:00:05.666 [Pipeline] // retry 00:00:05.673 [Pipeline] sh 00:00:05.955 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.971 [Pipeline] httpRequest 00:00:06.832 [Pipeline] echo 00:00:06.833 Sorcerer 10.211.164.20 is alive 00:00:06.839 [Pipeline] retry 00:00:06.840 [Pipeline] { 00:00:06.849 [Pipeline] httpRequest 00:00:06.852 HttpMethod: GET 00:00:06.852 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:06.852 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:06.866 Response Code: HTTP/1.1 200 OK 00:00:06.867 Success: Status code 200 is in the accepted range: 200,404 00:00:06.867 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:21.561 [Pipeline] } 00:01:21.580 [Pipeline] // retry 00:01:21.588 [Pipeline] sh 00:01:21.875 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:24.424 [Pipeline] sh 00:01:24.706 + git -C spdk log --oneline -n5 00:01:24.706 c13c99a5e test: Various fixes for Fedora40 00:01:24.706 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:24.706 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:24.706 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:24.706 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:24.727 [Pipeline] withCredentials 00:01:24.739 > git --version # timeout=10 00:01:24.753 > git --version # 'git version 2.39.2' 00:01:24.769 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:24.772 [Pipeline] { 00:01:24.781 [Pipeline] retry 00:01:24.784 [Pipeline] { 00:01:24.800 [Pipeline] sh 00:01:25.082 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:25.094 [Pipeline] } 00:01:25.116 [Pipeline] // retry 00:01:25.122 [Pipeline] } 00:01:25.139 [Pipeline] // withCredentials 00:01:25.150 [Pipeline] httpRequest 00:01:25.629 [Pipeline] echo 00:01:25.631 Sorcerer 10.211.164.20 is alive 00:01:25.642 [Pipeline] retry 00:01:25.644 [Pipeline] { 00:01:25.660 [Pipeline] httpRequest 00:01:25.665 HttpMethod: GET 00:01:25.665 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.666 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.667 Response Code: HTTP/1.1 200 OK 00:01:25.668 Success: Status code 200 is in the accepted range: 200,404 00:01:25.668 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.782 [Pipeline] } 00:01:29.800 [Pipeline] // retry 
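Note: each dependency fetched above (jbp, spdk, and next dpdk) follows the same pattern: the pipeline asks the Sorcerer package cache at 10.211.164.20 for a pre-built tarball, checks for a 200 response, saves it into the workspace, and unpacks it with tar --no-same-owner. A minimal shell sketch of that pattern follows; the pipeline itself uses the Jenkins httpRequest step rather than curl, and the fetch_package helper name is made up here purely for illustration.

    # Illustrative sketch only: the pipeline uses the Jenkins httpRequest step,
    # not curl, and fetch_package is an invented helper name.
    fetch_package() {
      local name="$1" sha="$2"
      local tarball="${name}_${sha}.tar.gz"
      # Sorcerer serves pre-packaged sources over plain HTTP inside the lab network
      curl -fSs -o "$tarball" "http://10.211.164.20/packages/$tarball"
      # --no-same-owner: extract files as the jenkins user, not the archive's owner
      tar --no-same-owner -xf "$tarball"
    }

    fetch_package jbp  db4637e8b949f278f369ec13f70585206ccd9507
    fetch_package spdk c13c99a5eba3bff912124706e0ae1d70defef44d
    fetch_package dpdk d15625009dced269fcec27fc81dd74fd58d54cdb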
00:01:29.808 [Pipeline] sh 00:01:30.091 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:31.477 [Pipeline] sh 00:01:31.758 + git -C dpdk log --oneline -n5 00:01:31.758 eeb0605f11 version: 23.11.0 00:01:31.758 238778122a doc: update release notes for 23.11 00:01:31.758 46aa6b3cfc doc: fix description of RSS features 00:01:31.758 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:31.758 7e421ae345 devtools: support skipping forbid rule check 00:01:31.776 [Pipeline] writeFile 00:01:31.790 [Pipeline] sh 00:01:32.088 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:32.116 [Pipeline] sh 00:01:32.389 + cat autorun-spdk.conf 00:01:32.389 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.389 SPDK_TEST_NVMF=1 00:01:32.389 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.389 SPDK_TEST_USDT=1 00:01:32.389 SPDK_RUN_UBSAN=1 00:01:32.389 SPDK_TEST_NVMF_MDNS=1 00:01:32.389 NET_TYPE=virt 00:01:32.389 SPDK_JSONRPC_GO_CLIENT=1 00:01:32.389 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:32.389 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:32.389 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.394 RUN_NIGHTLY=1 00:01:32.396 [Pipeline] } 00:01:32.408 [Pipeline] // stage 00:01:32.422 [Pipeline] stage 00:01:32.424 [Pipeline] { (Run VM) 00:01:32.435 [Pipeline] sh 00:01:32.714 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:32.714 + echo 'Start stage prepare_nvme.sh' 00:01:32.714 Start stage prepare_nvme.sh 00:01:32.714 + [[ -n 4 ]] 00:01:32.714 + disk_prefix=ex4 00:01:32.714 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:32.714 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:32.714 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:32.714 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.714 ++ SPDK_TEST_NVMF=1 00:01:32.714 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.714 ++ SPDK_TEST_USDT=1 00:01:32.714 ++ SPDK_RUN_UBSAN=1 00:01:32.714 ++ SPDK_TEST_NVMF_MDNS=1 00:01:32.714 ++ NET_TYPE=virt 00:01:32.714 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:32.714 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:32.714 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:32.714 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.714 ++ RUN_NIGHTLY=1 00:01:32.714 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:32.714 + nvme_files=() 00:01:32.714 + declare -A nvme_files 00:01:32.714 + backend_dir=/var/lib/libvirt/images/backends 00:01:32.714 + nvme_files['nvme.img']=5G 00:01:32.714 + nvme_files['nvme-cmb.img']=5G 00:01:32.714 + nvme_files['nvme-multi0.img']=4G 00:01:32.714 + nvme_files['nvme-multi1.img']=4G 00:01:32.714 + nvme_files['nvme-multi2.img']=4G 00:01:32.714 + nvme_files['nvme-openstack.img']=8G 00:01:32.714 + nvme_files['nvme-zns.img']=5G 00:01:32.714 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:32.714 + (( SPDK_TEST_FTL == 1 )) 00:01:32.714 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:32.714 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:32.714 + for nvme in "${!nvme_files[@]}" 00:01:32.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:32.714 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.714 + for nvme in "${!nvme_files[@]}" 00:01:32.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:32.714 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.714 + for nvme in "${!nvme_files[@]}" 00:01:32.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:32.714 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:32.714 + for nvme in "${!nvme_files[@]}" 00:01:32.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:32.714 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.714 + for nvme in "${!nvme_files[@]}" 00:01:32.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:32.714 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.714 + for nvme in "${!nvme_files[@]}" 00:01:32.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:32.973 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.973 + for nvme in "${!nvme_files[@]}" 00:01:32.973 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:32.973 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.973 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:32.973 + echo 'End stage prepare_nvme.sh' 00:01:32.973 End stage prepare_nvme.sh 00:01:32.984 [Pipeline] sh 00:01:33.292 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:33.292 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:01:33.292 00:01:33.292 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:33.292 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:33.292 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:33.292 HELP=0 00:01:33.292 DRY_RUN=0 00:01:33.292 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:33.292 NVME_DISKS_TYPE=nvme,nvme, 00:01:33.292 NVME_AUTO_CREATE=0 00:01:33.292 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:33.292 NVME_CMB=,, 00:01:33.292 NVME_PMR=,, 00:01:33.292 NVME_ZNS=,, 00:01:33.292 NVME_MS=,, 00:01:33.292 NVME_FDP=,, 00:01:33.292 
SPDK_VAGRANT_DISTRO=fedora39 00:01:33.292 SPDK_VAGRANT_VMCPU=10 00:01:33.292 SPDK_VAGRANT_VMRAM=12288 00:01:33.292 SPDK_VAGRANT_PROVIDER=libvirt 00:01:33.292 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:33.292 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:33.292 SPDK_OPENSTACK_NETWORK=0 00:01:33.292 VAGRANT_PACKAGE_BOX=0 00:01:33.292 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:33.292 FORCE_DISTRO=true 00:01:33.292 VAGRANT_BOX_VERSION= 00:01:33.292 EXTRA_VAGRANTFILES= 00:01:33.292 NIC_MODEL=e1000 00:01:33.292 00:01:33.292 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:33.292 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:36.576 Bringing machine 'default' up with 'libvirt' provider... 00:01:36.837 ==> default: Creating image (snapshot of base box volume). 00:01:36.837 ==> default: Creating domain with the following settings... 00:01:36.837 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733064129_4994227a6258e258c614 00:01:36.837 ==> default: -- Domain type: kvm 00:01:36.837 ==> default: -- Cpus: 10 00:01:36.837 ==> default: -- Feature: acpi 00:01:36.837 ==> default: -- Feature: apic 00:01:36.837 ==> default: -- Feature: pae 00:01:36.837 ==> default: -- Memory: 12288M 00:01:36.837 ==> default: -- Memory Backing: hugepages: 00:01:36.837 ==> default: -- Management MAC: 00:01:36.837 ==> default: -- Loader: 00:01:36.837 ==> default: -- Nvram: 00:01:36.837 ==> default: -- Base box: spdk/fedora39 00:01:36.837 ==> default: -- Storage pool: default 00:01:36.837 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733064129_4994227a6258e258c614.img (20G) 00:01:36.837 ==> default: -- Volume Cache: default 00:01:36.837 ==> default: -- Kernel: 00:01:36.837 ==> default: -- Initrd: 00:01:36.837 ==> default: -- Graphics Type: vnc 00:01:36.837 ==> default: -- Graphics Port: -1 00:01:36.837 ==> default: -- Graphics IP: 127.0.0.1 00:01:36.837 ==> default: -- Graphics Password: Not defined 00:01:36.837 ==> default: -- Video Type: cirrus 00:01:36.837 ==> default: -- Video VRAM: 9216 00:01:36.837 ==> default: -- Sound Type: 00:01:36.838 ==> default: -- Keymap: en-us 00:01:36.838 ==> default: -- TPM Path: 00:01:36.838 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:36.838 ==> default: -- Command line args: 00:01:36.838 ==> default: -> value=-device, 00:01:36.838 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:36.838 ==> default: -> value=-drive, 00:01:36.838 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:36.838 ==> default: -> value=-device, 00:01:36.838 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.838 ==> default: -> value=-device, 00:01:36.838 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:36.838 ==> default: -> value=-drive, 00:01:36.838 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:36.838 ==> default: -> value=-device, 00:01:36.838 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.838 ==> default: -> value=-drive, 00:01:36.838 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:36.838 ==> default: -> value=-device, 00:01:36.838 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.838 ==> default: -> value=-drive, 00:01:36.838 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:36.838 ==> default: -> value=-device, 00:01:36.838 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:37.097 ==> default: Creating shared folders metadata... 00:01:37.097 ==> default: Starting domain. 00:01:39.003 ==> default: Waiting for domain to get an IP address... 00:01:57.085 ==> default: Waiting for SSH to become available... 00:01:57.085 ==> default: Configuring and enabling network interfaces... 00:02:00.371 default: SSH address: 192.168.121.20:22 00:02:00.371 default: SSH username: vagrant 00:02:00.371 default: SSH auth method: private key 00:02:02.275 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:10.428 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:15.792 ==> default: Mounting SSHFS shared folder... 00:02:17.691 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:17.691 ==> default: Checking Mount.. 00:02:19.067 ==> default: Folder Successfully Mounted! 00:02:19.067 ==> default: Running provisioner: file... 00:02:20.004 default: ~/.gitconfig => .gitconfig 00:02:20.572 00:02:20.572 SUCCESS! 00:02:20.572 00:02:20.572 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:20.572 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:20.572 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:20.572 00:02:20.581 [Pipeline] } 00:02:20.596 [Pipeline] // stage 00:02:20.608 [Pipeline] dir 00:02:20.608 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:20.610 [Pipeline] { 00:02:20.624 [Pipeline] catchError 00:02:20.626 [Pipeline] { 00:02:20.639 [Pipeline] sh 00:02:20.919 + vagrant ssh-config --host vagrant 00:02:20.919 + sed -ne /^Host/,$p 00:02:20.919 + tee ssh_conf 00:02:23.450 Host vagrant 00:02:23.450 HostName 192.168.121.20 00:02:23.450 User vagrant 00:02:23.450 Port 22 00:02:23.450 UserKnownHostsFile /dev/null 00:02:23.450 StrictHostKeyChecking no 00:02:23.450 PasswordAuthentication no 00:02:23.450 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:23.450 IdentitiesOnly yes 00:02:23.450 LogLevel FATAL 00:02:23.450 ForwardAgent yes 00:02:23.450 ForwardX11 yes 00:02:23.450 00:02:23.464 [Pipeline] withEnv 00:02:23.466 [Pipeline] { 00:02:23.480 [Pipeline] sh 00:02:23.760 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:23.760 source /etc/os-release 00:02:23.760 [[ -e /image.version ]] && img=$(< /image.version) 00:02:23.760 # Minimal, systemd-like check. 
00:02:23.760 if [[ -e /.dockerenv ]]; then 00:02:23.760 # Clear garbage from the node's name: 00:02:23.760 # agt-er_autotest_547-896 -> autotest_547-896 00:02:23.760 # $HOSTNAME is the actual container id 00:02:23.760 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:23.760 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:23.760 # We can assume this is a mount from a host where container is running, 00:02:23.760 # so fetch its hostname to easily identify the target swarm worker. 00:02:23.760 container="$(< /etc/hostname) ($agent)" 00:02:23.760 else 00:02:23.760 # Fallback 00:02:23.760 container=$agent 00:02:23.760 fi 00:02:23.760 fi 00:02:23.760 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:23.760 00:02:24.030 [Pipeline] } 00:02:24.045 [Pipeline] // withEnv 00:02:24.054 [Pipeline] setCustomBuildProperty 00:02:24.070 [Pipeline] stage 00:02:24.073 [Pipeline] { (Tests) 00:02:24.090 [Pipeline] sh 00:02:24.370 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:24.645 [Pipeline] sh 00:02:24.926 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:25.200 [Pipeline] timeout 00:02:25.200 Timeout set to expire in 1 hr 0 min 00:02:25.202 [Pipeline] { 00:02:25.216 [Pipeline] sh 00:02:25.496 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:26.064 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:26.075 [Pipeline] sh 00:02:26.357 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:26.630 [Pipeline] sh 00:02:26.911 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:27.186 [Pipeline] sh 00:02:27.467 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:27.726 ++ readlink -f spdk_repo 00:02:27.726 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:27.726 + [[ -n /home/vagrant/spdk_repo ]] 00:02:27.726 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:27.726 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:27.726 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:27.726 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:27.726 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:27.726 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:27.726 + cd /home/vagrant/spdk_repo 00:02:27.726 + source /etc/os-release 00:02:27.726 ++ NAME='Fedora Linux' 00:02:27.726 ++ VERSION='39 (Cloud Edition)' 00:02:27.726 ++ ID=fedora 00:02:27.726 ++ VERSION_ID=39 00:02:27.726 ++ VERSION_CODENAME= 00:02:27.726 ++ PLATFORM_ID=platform:f39 00:02:27.726 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:27.726 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:27.726 ++ LOGO=fedora-logo-icon 00:02:27.726 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:27.726 ++ HOME_URL=https://fedoraproject.org/ 00:02:27.726 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:27.726 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:27.726 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:27.726 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:27.726 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:27.726 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:27.726 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:27.726 ++ SUPPORT_END=2024-11-12 00:02:27.726 ++ VARIANT='Cloud Edition' 00:02:27.726 ++ VARIANT_ID=cloud 00:02:27.726 + uname -a 00:02:27.726 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:27.726 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:27.726 Hugepages 00:02:27.726 node hugesize free / total 00:02:27.726 node0 1048576kB 0 / 0 00:02:27.726 node0 2048kB 0 / 0 00:02:27.726 00:02:27.726 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:27.726 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:27.985 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:27.985 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:27.985 + rm -f /tmp/spdk-ld-path 00:02:27.985 + source autorun-spdk.conf 00:02:27.985 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:27.985 ++ SPDK_TEST_NVMF=1 00:02:27.985 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:27.985 ++ SPDK_TEST_USDT=1 00:02:27.985 ++ SPDK_RUN_UBSAN=1 00:02:27.985 ++ SPDK_TEST_NVMF_MDNS=1 00:02:27.985 ++ NET_TYPE=virt 00:02:27.985 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:27.985 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:27.985 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:27.985 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:27.985 ++ RUN_NIGHTLY=1 00:02:27.985 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:27.985 + [[ -n '' ]] 00:02:27.985 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:27.985 + for M in /var/spdk/build-*-manifest.txt 00:02:27.985 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:27.985 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:27.985 + for M in /var/spdk/build-*-manifest.txt 00:02:27.985 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:27.985 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:27.985 + for M in /var/spdk/build-*-manifest.txt 00:02:27.985 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:27.985 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:27.985 ++ uname 00:02:27.985 + [[ Linux == \L\i\n\u\x ]] 00:02:27.985 + sudo dmesg -T 00:02:27.985 + sudo dmesg --clear 00:02:27.985 + dmesg_pid=5963 00:02:27.985 + [[ Fedora Linux == FreeBSD ]] 00:02:27.985 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:27.985 + sudo dmesg -Tw 
00:02:27.985 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:27.985 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:27.985 + [[ -x /usr/src/fio-static/fio ]] 00:02:27.985 + export FIO_BIN=/usr/src/fio-static/fio 00:02:27.985 + FIO_BIN=/usr/src/fio-static/fio 00:02:27.985 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:27.985 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:27.985 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:27.985 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:27.985 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:27.985 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:27.985 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:27.985 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:27.985 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:27.985 Test configuration: 00:02:27.985 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:27.985 SPDK_TEST_NVMF=1 00:02:27.985 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:27.985 SPDK_TEST_USDT=1 00:02:27.985 SPDK_RUN_UBSAN=1 00:02:27.985 SPDK_TEST_NVMF_MDNS=1 00:02:27.985 NET_TYPE=virt 00:02:27.985 SPDK_JSONRPC_GO_CLIENT=1 00:02:27.985 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:27.985 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:27.985 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:27.985 RUN_NIGHTLY=1 14:43:01 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:27.985 14:43:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:28.244 14:43:01 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:28.244 14:43:01 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:28.244 14:43:01 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:28.244 14:43:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.244 14:43:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.244 14:43:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.244 14:43:01 -- paths/export.sh@5 -- $ export PATH 00:02:28.244 14:43:01 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.244 14:43:01 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:28.244 14:43:01 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:28.244 14:43:01 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733064181.XXXXXX 00:02:28.244 14:43:01 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733064181.XnfBR2 00:02:28.244 14:43:01 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:28.244 14:43:01 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:28.244 14:43:01 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:28.244 14:43:01 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:28.244 14:43:01 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:28.244 14:43:01 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:28.244 14:43:01 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:28.244 14:43:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.244 14:43:01 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:28.244 14:43:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:28.244 14:43:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:28.244 14:43:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:28.244 14:43:01 -- spdk/autobuild.sh@16 -- $ date -u 00:02:28.244 Sun Dec 1 02:43:01 PM UTC 2024 00:02:28.244 14:43:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:28.244 LTS-67-gc13c99a5e 00:02:28.244 14:43:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:28.244 14:43:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:28.244 14:43:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:28.244 14:43:01 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:28.244 14:43:01 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:28.244 14:43:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.244 ************************************ 00:02:28.244 START TEST ubsan 00:02:28.244 ************************************ 00:02:28.244 using ubsan 00:02:28.244 14:43:01 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:28.244 00:02:28.244 real 0m0.000s 00:02:28.244 user 0m0.000s 00:02:28.244 sys 0m0.000s 00:02:28.244 14:43:01 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:28.244 14:43:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.244 ************************************ 00:02:28.244 END TEST ubsan 00:02:28.244 ************************************ 00:02:28.244 
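Aside: the START TEST / END TEST banners and the real/user/sys timing lines above come from the autotest run_test wrapper, which brackets a timed command with banners and presumably propagates its exit status. A rough sketch of that pattern, assuming a simplified form (this is not the actual autotest_common.sh implementation):

    # Rough illustration of the run_test banner/timing pattern seen above;
    # not the real autotest_common.sh implementation.
    run_test() {
      local name="$1"; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"          # the per-test real/user/sys lines come from this
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
    }

    run_test ubsan echo 'using ubsan'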
14:43:01 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:28.244 14:43:01 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:28.244 14:43:01 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:28.244 14:43:01 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:28.244 14:43:01 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:28.244 14:43:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.244 ************************************ 00:02:28.244 START TEST build_native_dpdk 00:02:28.244 ************************************ 00:02:28.244 14:43:01 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:28.244 14:43:01 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:28.244 14:43:01 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:28.244 14:43:01 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:28.244 14:43:01 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:28.244 14:43:01 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:28.244 14:43:01 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:28.244 14:43:01 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:28.244 14:43:01 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:28.244 14:43:01 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:28.244 14:43:01 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:28.244 14:43:01 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:28.244 14:43:01 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:28.244 14:43:01 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:28.244 14:43:01 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:28.244 14:43:01 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:28.244 14:43:01 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:28.244 14:43:01 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:28.244 eeb0605f11 version: 23.11.0 00:02:28.244 238778122a doc: update release notes for 23.11 00:02:28.244 46aa6b3cfc doc: fix description of RSS features 00:02:28.244 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:28.244 7e421ae345 devtools: support skipping forbid rule check 00:02:28.244 14:43:01 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:28.244 14:43:01 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:28.244 14:43:01 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:28.244 14:43:01 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:28.244 14:43:01 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:28.244 14:43:01 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:28.244 14:43:01 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:28.244 14:43:01 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:28.244 14:43:01 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:28.244 14:43:01 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:28.244 14:43:01 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:28.244 14:43:01 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:28.244 14:43:01 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:28.244 14:43:01 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:28.244 14:43:01 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:28.244 14:43:01 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:28.244 14:43:01 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:28.244 14:43:01 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:28.244 14:43:01 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:28.244 14:43:01 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:28.244 14:43:01 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:28.244 14:43:01 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:28.244 14:43:01 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:28.244 14:43:01 -- scripts/common.sh@343 -- $ case "$op" in 00:02:28.244 14:43:01 -- scripts/common.sh@344 -- $ : 1 00:02:28.244 14:43:01 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:28.244 14:43:01 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:28.244 14:43:01 -- scripts/common.sh@364 -- $ decimal 23 00:02:28.244 14:43:01 -- scripts/common.sh@352 -- $ local d=23 00:02:28.244 14:43:01 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:28.244 14:43:01 -- scripts/common.sh@354 -- $ echo 23 00:02:28.244 14:43:01 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:28.244 14:43:01 -- scripts/common.sh@365 -- $ decimal 21 00:02:28.244 14:43:01 -- scripts/common.sh@352 -- $ local d=21 00:02:28.244 14:43:01 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:28.244 14:43:01 -- scripts/common.sh@354 -- $ echo 21 00:02:28.244 14:43:01 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:28.244 14:43:01 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:28.244 14:43:01 -- scripts/common.sh@366 -- $ return 1 00:02:28.244 14:43:01 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:28.244 patching file config/rte_config.h 00:02:28.244 Hunk #1 succeeded at 60 (offset 1 line). 00:02:28.245 14:43:01 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:28.245 14:43:01 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:28.245 14:43:01 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:28.245 14:43:01 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:28.245 14:43:01 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:28.245 14:43:01 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:28.245 14:43:01 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:28.245 14:43:01 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:28.245 14:43:01 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:28.245 14:43:01 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:28.245 14:43:01 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:28.245 14:43:01 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:28.245 14:43:01 -- scripts/common.sh@343 -- $ case "$op" in 00:02:28.245 14:43:01 -- scripts/common.sh@344 -- $ : 1 00:02:28.245 14:43:01 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:28.245 14:43:01 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:28.245 14:43:01 -- scripts/common.sh@364 -- $ decimal 23 00:02:28.245 14:43:01 -- scripts/common.sh@352 -- $ local d=23 00:02:28.245 14:43:01 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:28.245 14:43:01 -- scripts/common.sh@354 -- $ echo 23 00:02:28.245 14:43:01 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:28.245 14:43:01 -- scripts/common.sh@365 -- $ decimal 24 00:02:28.245 14:43:01 -- scripts/common.sh@352 -- $ local d=24 00:02:28.245 14:43:01 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:28.245 14:43:01 -- scripts/common.sh@354 -- $ echo 24 00:02:28.245 14:43:01 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:28.245 14:43:01 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:28.245 14:43:01 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:28.245 14:43:01 -- scripts/common.sh@367 -- $ return 0 00:02:28.245 14:43:01 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:28.245 patching file lib/pcapng/rte_pcapng.c 00:02:28.245 14:43:01 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:28.245 14:43:01 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:28.245 14:43:01 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:28.245 14:43:01 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:28.245 14:43:01 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:34.807 The Meson build system 00:02:34.807 Version: 1.5.0 00:02:34.807 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:34.807 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:34.807 Build type: native build 00:02:34.807 Program cat found: YES (/usr/bin/cat) 00:02:34.807 Project name: DPDK 00:02:34.807 Project version: 23.11.0 00:02:34.807 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:34.807 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:34.807 Host machine cpu family: x86_64 00:02:34.807 Host machine cpu: x86_64 00:02:34.807 Message: ## Building in Developer Mode ## 00:02:34.807 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.808 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:34.808 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.808 Program python3 found: YES (/usr/bin/python3) 00:02:34.808 Program cat found: YES (/usr/bin/cat) 00:02:34.808 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
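Aside on the scripts/common.sh xtrace a few lines up: the lt / cmp_versions checks (lt 23.11.0 21.11.0, then lt 23.11.0 24.07.0) split each version string on '.', '-' or ':' and compare the pieces numerically, left to right. A standalone sketch of that comparison, with the function name vers_lt invented here for illustration:

    # Minimal sketch of the cmp_versions logic traced above; vers_lt is an
    # invented name, not a function from scripts/common.sh.
    vers_lt() {   # succeeds (returns 0) if version $1 is strictly older than $2
      local IFS='.-:'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
    }

    vers_lt 23.11.0 21.11.0 || echo "not older"          # matches the 'return 1' traced above
    vers_lt 23.11.0 24.07.0 && echo "older than 24.07"   # matches the 'return 0' traced above

In the log, the second check succeeding is what appears to gate the lib/pcapng/rte_pcapng.c patch applied for DPDK releases older than 24.07.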
00:02:34.808 Compiler for C supports arguments -march=native: YES 00:02:34.808 Checking for size of "void *" : 8 00:02:34.808 Checking for size of "void *" : 8 (cached) 00:02:34.808 Library m found: YES 00:02:34.808 Library numa found: YES 00:02:34.808 Has header "numaif.h" : YES 00:02:34.808 Library fdt found: NO 00:02:34.808 Library execinfo found: NO 00:02:34.808 Has header "execinfo.h" : YES 00:02:34.808 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:34.808 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.808 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.808 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.808 Run-time dependency openssl found: YES 3.1.1 00:02:34.808 Run-time dependency libpcap found: YES 1.10.4 00:02:34.808 Has header "pcap.h" with dependency libpcap: YES 00:02:34.808 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.808 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.808 Compiler for C supports arguments -Wformat: YES 00:02:34.808 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:34.808 Compiler for C supports arguments -Wformat-security: NO 00:02:34.808 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.808 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.808 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.808 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.808 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.808 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.808 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.808 Compiler for C supports arguments -Wundef: YES 00:02:34.808 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.808 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.808 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:34.808 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.808 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:34.808 Program objdump found: YES (/usr/bin/objdump) 00:02:34.808 Compiler for C supports arguments -mavx512f: YES 00:02:34.808 Checking if "AVX512 checking" compiles: YES 00:02:34.808 Fetching value of define "__SSE4_2__" : 1 00:02:34.808 Fetching value of define "__AES__" : 1 00:02:34.808 Fetching value of define "__AVX__" : 1 00:02:34.808 Fetching value of define "__AVX2__" : 1 00:02:34.808 Fetching value of define "__AVX512BW__" : (undefined) 00:02:34.808 Fetching value of define "__AVX512CD__" : (undefined) 00:02:34.808 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:34.808 Fetching value of define "__AVX512F__" : (undefined) 00:02:34.808 Fetching value of define "__AVX512VL__" : (undefined) 00:02:34.808 Fetching value of define "__PCLMUL__" : 1 00:02:34.808 Fetching value of define "__RDRND__" : 1 00:02:34.808 Fetching value of define "__RDSEED__" : 1 00:02:34.808 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.808 Fetching value of define "__znver1__" : (undefined) 00:02:34.808 Fetching value of define "__znver2__" : (undefined) 00:02:34.808 Fetching value of define "__znver3__" : (undefined) 00:02:34.808 Fetching value of define "__znver4__" : (undefined) 00:02:34.808 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.808 Message: lib/log: Defining dependency "log" 00:02:34.808 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.808 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.808 Checking for function "getentropy" : NO 00:02:34.808 Message: lib/eal: Defining dependency "eal" 00:02:34.808 Message: lib/ring: Defining dependency "ring" 00:02:34.808 Message: lib/rcu: Defining dependency "rcu" 00:02:34.808 Message: lib/mempool: Defining dependency "mempool" 00:02:34.808 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.808 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.808 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.808 Compiler for C supports arguments -mpclmul: YES 00:02:34.808 Compiler for C supports arguments -maes: YES 00:02:34.808 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.808 Compiler for C supports arguments -mavx512bw: YES 00:02:34.808 Compiler for C supports arguments -mavx512dq: YES 00:02:34.808 Compiler for C supports arguments -mavx512vl: YES 00:02:34.808 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.808 Compiler for C supports arguments -mavx2: YES 00:02:34.808 Compiler for C supports arguments -mavx: YES 00:02:34.808 Message: lib/net: Defining dependency "net" 00:02:34.808 Message: lib/meter: Defining dependency "meter" 00:02:34.808 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.808 Message: lib/pci: Defining dependency "pci" 00:02:34.808 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.808 Message: lib/metrics: Defining dependency "metrics" 00:02:34.808 Message: lib/hash: Defining dependency "hash" 00:02:34.808 Message: lib/timer: Defining dependency "timer" 00:02:34.808 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.808 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:34.808 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:34.808 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:34.808 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:34.808 Message: lib/acl: Defining dependency "acl" 00:02:34.808 Message: lib/bbdev: Defining dependency "bbdev" 00:02:34.808 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:34.808 Run-time dependency libelf found: YES 0.191 00:02:34.808 Message: lib/bpf: Defining dependency "bpf" 00:02:34.808 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:34.808 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.808 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.808 Message: lib/distributor: Defining dependency "distributor" 00:02:34.808 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.808 Message: lib/efd: Defining dependency "efd" 00:02:34.808 Message: lib/eventdev: Defining dependency "eventdev" 00:02:34.808 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:34.808 Message: lib/gpudev: Defining dependency "gpudev" 00:02:34.808 Message: lib/gro: Defining dependency "gro" 00:02:34.808 Message: lib/gso: Defining dependency "gso" 00:02:34.808 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:34.808 Message: lib/jobstats: Defining dependency "jobstats" 00:02:34.808 Message: lib/latencystats: Defining dependency "latencystats" 00:02:34.808 Message: lib/lpm: Defining dependency "lpm" 00:02:34.808 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.808 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:34.808 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:34.808 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:34.808 Message: lib/member: Defining dependency "member" 00:02:34.808 Message: lib/pcapng: Defining dependency "pcapng" 00:02:34.808 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.808 Message: lib/power: Defining dependency "power" 00:02:34.808 Message: lib/rawdev: Defining dependency "rawdev" 00:02:34.808 Message: lib/regexdev: Defining dependency "regexdev" 00:02:34.808 Message: lib/mldev: Defining dependency "mldev" 00:02:34.808 Message: lib/rib: Defining dependency "rib" 00:02:34.808 Message: lib/reorder: Defining dependency "reorder" 00:02:34.808 Message: lib/sched: Defining dependency "sched" 00:02:34.808 Message: lib/security: Defining dependency "security" 00:02:34.808 Message: lib/stack: Defining dependency "stack" 00:02:34.808 Has header "linux/userfaultfd.h" : YES 00:02:34.808 Has header "linux/vduse.h" : YES 00:02:34.808 Message: lib/vhost: Defining dependency "vhost" 00:02:34.808 Message: lib/ipsec: Defining dependency "ipsec" 00:02:34.808 Message: lib/pdcp: Defining dependency "pdcp" 00:02:34.808 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.808 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:34.808 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:34.808 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:34.808 Message: lib/fib: Defining dependency "fib" 00:02:34.808 Message: lib/port: Defining dependency "port" 00:02:34.808 Message: lib/pdump: Defining dependency "pdump" 00:02:34.808 Message: lib/table: Defining dependency "table" 00:02:34.808 Message: lib/pipeline: Defining dependency "pipeline" 00:02:34.808 Message: lib/graph: Defining dependency "graph" 00:02:34.808 Message: lib/node: Defining dependency "node" 00:02:34.808 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:35.746 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:35.746 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:35.746 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:35.746 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:35.746 Compiler for C supports arguments -Wno-unused-value: YES 00:02:35.746 Compiler for C supports arguments -Wno-format: YES 00:02:35.746 Compiler for C supports arguments -Wno-format-security: YES 00:02:35.746 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:35.746 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:35.746 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:35.746 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:35.746 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:35.746 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:35.746 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:35.746 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:35.746 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:35.746 Has header "sys/epoll.h" : YES 00:02:35.746 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:35.746 Configuring doxy-api-html.conf using configuration 00:02:35.746 Configuring doxy-api-man.conf using configuration 00:02:35.746 Program mandb found: YES (/usr/bin/mandb) 00:02:35.746 Program sphinx-build found: NO 00:02:35.746 Configuring rte_build_config.h using configuration 00:02:35.746 Message: 00:02:35.746 ================= 00:02:35.746 Applications Enabled 00:02:35.746 ================= 
00:02:35.746 00:02:35.746 apps: 00:02:35.746 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:35.746 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:35.746 test-pmd, test-regex, test-sad, test-security-perf, 00:02:35.746 00:02:35.746 Message: 00:02:35.746 ================= 00:02:35.746 Libraries Enabled 00:02:35.746 ================= 00:02:35.746 00:02:35.746 libs: 00:02:35.746 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:35.746 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:35.746 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:35.746 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:35.746 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:35.746 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:35.746 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:35.746 00:02:35.746 00:02:35.746 Message: 00:02:35.746 =============== 00:02:35.746 Drivers Enabled 00:02:35.746 =============== 00:02:35.746 00:02:35.746 common: 00:02:35.746 00:02:35.746 bus: 00:02:35.746 pci, vdev, 00:02:35.746 mempool: 00:02:35.746 ring, 00:02:35.746 dma: 00:02:35.746 00:02:35.746 net: 00:02:35.746 i40e, 00:02:35.746 raw: 00:02:35.747 00:02:35.747 crypto: 00:02:35.747 00:02:35.747 compress: 00:02:35.747 00:02:35.747 regex: 00:02:35.747 00:02:35.747 ml: 00:02:35.747 00:02:35.747 vdpa: 00:02:35.747 00:02:35.747 event: 00:02:35.747 00:02:35.747 baseband: 00:02:35.747 00:02:35.747 gpu: 00:02:35.747 00:02:35.747 00:02:35.747 Message: 00:02:35.747 ================= 00:02:35.747 Content Skipped 00:02:35.747 ================= 00:02:35.747 00:02:35.747 apps: 00:02:35.747 00:02:35.747 libs: 00:02:35.747 00:02:35.747 drivers: 00:02:35.747 common/cpt: not in enabled drivers build config 00:02:35.747 common/dpaax: not in enabled drivers build config 00:02:35.747 common/iavf: not in enabled drivers build config 00:02:35.747 common/idpf: not in enabled drivers build config 00:02:35.747 common/mvep: not in enabled drivers build config 00:02:35.747 common/octeontx: not in enabled drivers build config 00:02:35.747 bus/auxiliary: not in enabled drivers build config 00:02:35.747 bus/cdx: not in enabled drivers build config 00:02:35.747 bus/dpaa: not in enabled drivers build config 00:02:35.747 bus/fslmc: not in enabled drivers build config 00:02:35.747 bus/ifpga: not in enabled drivers build config 00:02:35.747 bus/platform: not in enabled drivers build config 00:02:35.747 bus/vmbus: not in enabled drivers build config 00:02:35.747 common/cnxk: not in enabled drivers build config 00:02:35.747 common/mlx5: not in enabled drivers build config 00:02:35.747 common/nfp: not in enabled drivers build config 00:02:35.747 common/qat: not in enabled drivers build config 00:02:35.747 common/sfc_efx: not in enabled drivers build config 00:02:35.747 mempool/bucket: not in enabled drivers build config 00:02:35.747 mempool/cnxk: not in enabled drivers build config 00:02:35.747 mempool/dpaa: not in enabled drivers build config 00:02:35.747 mempool/dpaa2: not in enabled drivers build config 00:02:35.747 mempool/octeontx: not in enabled drivers build config 00:02:35.747 mempool/stack: not in enabled drivers build config 00:02:35.747 dma/cnxk: not in enabled drivers build config 00:02:35.747 dma/dpaa: not in enabled drivers build config 00:02:35.747 dma/dpaa2: not in enabled drivers build config 00:02:35.747 
dma/hisilicon: not in enabled drivers build config 00:02:35.747 dma/idxd: not in enabled drivers build config 00:02:35.747 dma/ioat: not in enabled drivers build config 00:02:35.747 dma/skeleton: not in enabled drivers build config 00:02:35.747 net/af_packet: not in enabled drivers build config 00:02:35.747 net/af_xdp: not in enabled drivers build config 00:02:35.747 net/ark: not in enabled drivers build config 00:02:35.747 net/atlantic: not in enabled drivers build config 00:02:35.747 net/avp: not in enabled drivers build config 00:02:35.747 net/axgbe: not in enabled drivers build config 00:02:35.747 net/bnx2x: not in enabled drivers build config 00:02:35.747 net/bnxt: not in enabled drivers build config 00:02:35.747 net/bonding: not in enabled drivers build config 00:02:35.747 net/cnxk: not in enabled drivers build config 00:02:35.747 net/cpfl: not in enabled drivers build config 00:02:35.747 net/cxgbe: not in enabled drivers build config 00:02:35.747 net/dpaa: not in enabled drivers build config 00:02:35.747 net/dpaa2: not in enabled drivers build config 00:02:35.747 net/e1000: not in enabled drivers build config 00:02:35.747 net/ena: not in enabled drivers build config 00:02:35.747 net/enetc: not in enabled drivers build config 00:02:35.747 net/enetfec: not in enabled drivers build config 00:02:35.747 net/enic: not in enabled drivers build config 00:02:35.747 net/failsafe: not in enabled drivers build config 00:02:35.747 net/fm10k: not in enabled drivers build config 00:02:35.747 net/gve: not in enabled drivers build config 00:02:35.747 net/hinic: not in enabled drivers build config 00:02:35.747 net/hns3: not in enabled drivers build config 00:02:35.747 net/iavf: not in enabled drivers build config 00:02:35.747 net/ice: not in enabled drivers build config 00:02:35.747 net/idpf: not in enabled drivers build config 00:02:35.747 net/igc: not in enabled drivers build config 00:02:35.747 net/ionic: not in enabled drivers build config 00:02:35.747 net/ipn3ke: not in enabled drivers build config 00:02:35.747 net/ixgbe: not in enabled drivers build config 00:02:35.747 net/mana: not in enabled drivers build config 00:02:35.747 net/memif: not in enabled drivers build config 00:02:35.747 net/mlx4: not in enabled drivers build config 00:02:35.747 net/mlx5: not in enabled drivers build config 00:02:35.747 net/mvneta: not in enabled drivers build config 00:02:35.747 net/mvpp2: not in enabled drivers build config 00:02:35.747 net/netvsc: not in enabled drivers build config 00:02:35.747 net/nfb: not in enabled drivers build config 00:02:35.747 net/nfp: not in enabled drivers build config 00:02:35.747 net/ngbe: not in enabled drivers build config 00:02:35.747 net/null: not in enabled drivers build config 00:02:35.747 net/octeontx: not in enabled drivers build config 00:02:35.747 net/octeon_ep: not in enabled drivers build config 00:02:35.747 net/pcap: not in enabled drivers build config 00:02:35.747 net/pfe: not in enabled drivers build config 00:02:35.747 net/qede: not in enabled drivers build config 00:02:35.747 net/ring: not in enabled drivers build config 00:02:35.747 net/sfc: not in enabled drivers build config 00:02:35.747 net/softnic: not in enabled drivers build config 00:02:35.747 net/tap: not in enabled drivers build config 00:02:35.747 net/thunderx: not in enabled drivers build config 00:02:35.747 net/txgbe: not in enabled drivers build config 00:02:35.747 net/vdev_netvsc: not in enabled drivers build config 00:02:35.747 net/vhost: not in enabled drivers build config 00:02:35.747 net/virtio: 
not in enabled drivers build config 00:02:35.747 net/vmxnet3: not in enabled drivers build config 00:02:35.747 raw/cnxk_bphy: not in enabled drivers build config 00:02:35.747 raw/cnxk_gpio: not in enabled drivers build config 00:02:35.747 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:35.747 raw/ifpga: not in enabled drivers build config 00:02:35.747 raw/ntb: not in enabled drivers build config 00:02:35.747 raw/skeleton: not in enabled drivers build config 00:02:35.747 crypto/armv8: not in enabled drivers build config 00:02:35.747 crypto/bcmfs: not in enabled drivers build config 00:02:35.747 crypto/caam_jr: not in enabled drivers build config 00:02:35.747 crypto/ccp: not in enabled drivers build config 00:02:35.747 crypto/cnxk: not in enabled drivers build config 00:02:35.747 crypto/dpaa_sec: not in enabled drivers build config 00:02:35.747 crypto/dpaa2_sec: not in enabled drivers build config 00:02:35.747 crypto/ipsec_mb: not in enabled drivers build config 00:02:35.747 crypto/mlx5: not in enabled drivers build config 00:02:35.747 crypto/mvsam: not in enabled drivers build config 00:02:35.747 crypto/nitrox: not in enabled drivers build config 00:02:35.747 crypto/null: not in enabled drivers build config 00:02:35.747 crypto/octeontx: not in enabled drivers build config 00:02:35.747 crypto/openssl: not in enabled drivers build config 00:02:35.747 crypto/scheduler: not in enabled drivers build config 00:02:35.747 crypto/uadk: not in enabled drivers build config 00:02:35.747 crypto/virtio: not in enabled drivers build config 00:02:35.747 compress/isal: not in enabled drivers build config 00:02:35.747 compress/mlx5: not in enabled drivers build config 00:02:35.747 compress/octeontx: not in enabled drivers build config 00:02:35.747 compress/zlib: not in enabled drivers build config 00:02:35.747 regex/mlx5: not in enabled drivers build config 00:02:35.747 regex/cn9k: not in enabled drivers build config 00:02:35.747 ml/cnxk: not in enabled drivers build config 00:02:35.747 vdpa/ifc: not in enabled drivers build config 00:02:35.747 vdpa/mlx5: not in enabled drivers build config 00:02:35.747 vdpa/nfp: not in enabled drivers build config 00:02:35.747 vdpa/sfc: not in enabled drivers build config 00:02:35.747 event/cnxk: not in enabled drivers build config 00:02:35.747 event/dlb2: not in enabled drivers build config 00:02:35.747 event/dpaa: not in enabled drivers build config 00:02:35.747 event/dpaa2: not in enabled drivers build config 00:02:35.747 event/dsw: not in enabled drivers build config 00:02:35.747 event/opdl: not in enabled drivers build config 00:02:35.747 event/skeleton: not in enabled drivers build config 00:02:35.747 event/sw: not in enabled drivers build config 00:02:35.747 event/octeontx: not in enabled drivers build config 00:02:35.747 baseband/acc: not in enabled drivers build config 00:02:35.747 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:35.747 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:35.747 baseband/la12xx: not in enabled drivers build config 00:02:35.747 baseband/null: not in enabled drivers build config 00:02:35.747 baseband/turbo_sw: not in enabled drivers build config 00:02:35.747 gpu/cuda: not in enabled drivers build config 00:02:35.747 00:02:35.747 00:02:35.747 Build targets in project: 220 00:02:35.747 00:02:35.747 DPDK 23.11.0 00:02:35.747 00:02:35.747 User defined options 00:02:35.747 libdir : lib 00:02:35.747 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:35.747 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:35.747 c_link_args : 00:02:35.747 enable_docs : false 00:02:35.747 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:35.747 enable_kmods : false 00:02:35.747 machine : native 00:02:35.747 tests : false 00:02:35.747 00:02:35.747 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:35.747 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:35.747 14:43:08 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:35.747 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:35.747 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:35.747 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:35.747 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:35.747 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:35.747 [5/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:35.747 [6/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:35.747 [7/710] Linking static target lib/librte_kvargs.a 00:02:36.025 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:36.025 [9/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:36.025 [10/710] Linking static target lib/librte_log.a 00:02:36.025 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.295 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:36.295 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:36.295 [14/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.295 [15/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:36.295 [16/710] Linking target lib/librte_log.so.24.0 00:02:36.295 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:36.553 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:36.553 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:36.553 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:36.811 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:36.811 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:36.811 [23/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:36.811 [24/710] Linking target lib/librte_kvargs.so.24.0 00:02:36.812 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:36.812 [26/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:36.812 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.070 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:37.070 [29/710] Linking static target lib/librte_telemetry.a 00:02:37.070 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:37.070 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:37.070 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:37.329 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:37.329 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.329 [35/710] Linking target lib/librte_telemetry.so.24.0 00:02:37.329 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:37.329 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:37.329 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:37.329 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:37.329 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.329 [41/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:37.329 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:37.329 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:37.587 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:37.587 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:37.846 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:37.846 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:37.846 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.104 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.104 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:38.104 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.104 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:38.104 [53/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:38.104 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:38.363 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.363 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:38.363 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:38.363 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:38.363 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:38.363 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:38.363 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:38.363 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:38.621 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:38.621 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:38.621 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:38.621 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:38.879 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:38.879 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:38.879 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:38.879 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.879 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:39.137 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
00:02:39.137 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:39.137 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:39.137 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:39.137 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:39.137 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.395 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:39.395 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:39.395 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:39.395 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.654 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.654 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:39.654 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:39.654 [85/710] Linking static target lib/librte_ring.a 00:02:39.912 [86/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.912 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:39.912 [88/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.912 [89/710] Linking static target lib/librte_eal.a 00:02:39.912 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:40.171 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.171 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:40.171 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:40.171 [94/710] Linking static target lib/librte_mempool.a 00:02:40.171 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.171 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:40.171 [97/710] Linking static target lib/librte_rcu.a 00:02:40.429 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:40.429 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:40.429 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.687 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.687 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:40.687 [103/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:40.687 [104/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.687 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:40.946 [106/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.946 [107/710] Linking static target lib/librte_mbuf.a 00:02:40.946 [108/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:40.946 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:40.946 [110/710] Linking static target lib/librte_net.a 00:02:41.205 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:41.205 [112/710] Linking static target lib/librte_meter.a 00:02:41.205 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.205 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:41.464 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:41.464 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:41.464 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.464 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:41.464 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.030 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:42.030 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:42.030 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.288 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:42.288 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.288 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.288 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:42.288 [127/710] Linking static target lib/librte_pci.a 00:02:42.288 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.546 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.546 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.546 [131/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.546 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.546 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.805 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.805 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.805 [136/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.805 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.805 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.805 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.805 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:43.063 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:43.063 [142/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.063 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:43.063 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:43.063 [145/710] Linking static target lib/librte_cmdline.a 00:02:43.322 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:43.322 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:43.322 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:43.322 [149/710] Linking static target lib/librte_metrics.a 00:02:43.581 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.581 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.840 [152/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.840 [153/710] Linking static target lib/librte_timer.a 00:02:43.840 [154/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:43.840 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.098 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.666 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:44.666 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:44.666 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:44.666 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:45.234 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:45.234 [162/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:45.234 [163/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:45.234 [164/710] Linking static target lib/librte_bitratestats.a 00:02:45.234 [165/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.234 [166/710] Linking static target lib/librte_ethdev.a 00:02:45.493 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:45.493 [168/710] Linking static target lib/librte_bbdev.a 00:02:45.493 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.493 [170/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.493 [171/710] Linking static target lib/librte_hash.a 00:02:45.493 [172/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.753 [173/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:45.753 [174/710] Linking static target lib/acl/libavx2_tmp.a 00:02:45.753 [175/710] Linking target lib/librte_eal.so.24.0 00:02:45.753 [176/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:45.753 [177/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:45.753 [178/710] Linking target lib/librte_ring.so.24.0 00:02:45.753 [179/710] Linking target lib/librte_meter.so.24.0 00:02:46.013 [180/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:46.013 [181/710] Linking target lib/librte_pci.so.24.0 00:02:46.013 [182/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:46.013 [183/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:46.013 [184/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:46.013 [185/710] Linking target lib/librte_rcu.so.24.0 00:02:46.013 [186/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:46.013 [187/710] Linking target lib/librte_mempool.so.24.0 00:02:46.013 [188/710] Linking static target lib/acl/libavx512_tmp.a 00:02:46.013 [189/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.013 [190/710] Linking target lib/librte_timer.so.24.0 00:02:46.013 [191/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.013 [192/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:46.272 [193/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:46.272 [194/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:46.272 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:46.272 [196/710] Linking target lib/librte_mbuf.so.24.0 00:02:46.272 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:46.272 [198/710] Linking target lib/librte_net.so.24.0 00:02:46.272 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:46.272 [200/710] Linking static target lib/librte_acl.a 00:02:46.531 [201/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:46.531 [202/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:46.531 [203/710] Linking target lib/librte_bbdev.so.24.0 00:02:46.531 [204/710] Linking target lib/librte_cmdline.so.24.0 00:02:46.531 [205/710] Linking static target lib/librte_cfgfile.a 00:02:46.531 [206/710] Linking target lib/librte_hash.so.24.0 00:02:46.531 [207/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:46.531 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:46.531 [209/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:46.790 [210/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.790 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:46.790 [212/710] Linking target lib/librte_acl.so.24.0 00:02:46.790 [213/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.790 [214/710] Linking target lib/librte_cfgfile.so.24.0 00:02:46.790 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:46.790 [216/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:47.049 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:47.049 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.308 [219/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.308 [220/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:47.308 [221/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.308 [222/710] Linking static target lib/librte_compressdev.a 00:02:47.308 [223/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:47.308 [224/710] Linking static target lib/librte_bpf.a 00:02:47.567 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:47.567 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.567 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:47.826 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:47.826 [229/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.826 [230/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:47.826 [231/710] Linking target lib/librte_compressdev.so.24.0 00:02:47.826 [232/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:47.826 [233/710] Linking static target lib/librte_distributor.a 00:02:48.085 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.085 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:48.085 [236/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:48.085 [237/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.345 [238/710] Linking static 
target lib/librte_dmadev.a 00:02:48.605 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.605 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:48.605 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:48.605 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:48.863 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:48.863 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:49.121 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:49.121 [246/710] Linking static target lib/librte_efd.a 00:02:49.121 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.121 [248/710] Linking static target lib/librte_cryptodev.a 00:02:49.121 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:49.379 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.379 [251/710] Linking target lib/librte_efd.so.24.0 00:02:49.637 [252/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:49.638 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:49.638 [254/710] Linking static target lib/librte_dispatcher.a 00:02:49.638 [255/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.638 [256/710] Linking target lib/librte_ethdev.so.24.0 00:02:49.638 [257/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:49.638 [258/710] Linking static target lib/librte_gpudev.a 00:02:49.897 [259/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:49.897 [260/710] Linking target lib/librte_metrics.so.24.0 00:02:49.897 [261/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:49.897 [262/710] Linking target lib/librte_bpf.so.24.0 00:02:49.897 [263/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:49.897 [264/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.897 [265/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:49.897 [266/710] Linking target lib/librte_bitratestats.so.24.0 00:02:49.897 [267/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:50.156 [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:50.156 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.156 [270/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:50.414 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:02:50.414 [272/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:50.414 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:50.414 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.414 [275/710] Linking target lib/librte_gpudev.so.24.0 00:02:50.673 [276/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:50.673 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:50.673 [278/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:50.673 [279/710] 
Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:50.933 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:50.933 [281/710] Linking static target lib/librte_eventdev.a 00:02:50.933 [282/710] Linking static target lib/librte_gro.a 00:02:50.933 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:50.933 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:50.933 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:50.933 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:50.933 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.192 [288/710] Linking target lib/librte_gro.so.24.0 00:02:51.192 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:51.192 [290/710] Linking static target lib/librte_gso.a 00:02:51.192 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.452 [292/710] Linking target lib/librte_gso.so.24.0 00:02:51.452 [293/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:51.452 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:51.452 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:51.452 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:51.452 [297/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:51.711 [298/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:51.711 [299/710] Linking static target lib/librte_jobstats.a 00:02:51.711 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:51.711 [301/710] Linking static target lib/librte_ip_frag.a 00:02:51.711 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:51.711 [303/710] Linking static target lib/librte_latencystats.a 00:02:51.970 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.970 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:51.970 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.970 [307/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.970 [308/710] Linking target lib/librte_ip_frag.so.24.0 00:02:51.970 [309/710] Linking target lib/librte_latencystats.so.24.0 00:02:51.970 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:51.970 [311/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:52.228 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:52.228 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:52.228 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:52.228 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.228 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.228 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.487 [318/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:52.487 [319/710] Linking static target lib/librte_lpm.a 00:02:52.745 [320/710] Compiling C object 
lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:52.745 [321/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.746 [322/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.746 [323/710] Linking target lib/librte_eventdev.so.24.0 00:02:52.746 [324/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:53.004 [325/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.004 [326/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.004 [327/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:53.004 [328/710] Linking target lib/librte_lpm.so.24.0 00:02:53.004 [329/710] Linking target lib/librte_dispatcher.so.24.0 00:02:53.004 [330/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.004 [331/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:53.004 [332/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:53.004 [333/710] Linking static target lib/librte_pcapng.a 00:02:53.004 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:53.262 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.262 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:53.262 [337/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.262 [338/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:53.521 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.521 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.521 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:53.521 [342/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:53.521 [343/710] Linking static target lib/librte_member.a 00:02:53.521 [344/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.781 [345/710] Linking static target lib/librte_power.a 00:02:53.781 [346/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:53.781 [347/710] Linking static target lib/librte_regexdev.a 00:02:53.781 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:53.781 [349/710] Linking static target lib/librte_rawdev.a 00:02:53.781 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:53.781 [351/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.781 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:53.781 [353/710] Linking target lib/librte_member.so.24.0 00:02:54.040 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:54.040 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:54.040 [356/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:54.040 [357/710] Linking static target lib/librte_mldev.a 00:02:54.298 [358/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.298 [359/710] Linking target lib/librte_rawdev.so.24.0 00:02:54.298 [360/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.298 [361/710] Compiling C 
object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:54.298 [362/710] Linking target lib/librte_power.so.24.0 00:02:54.298 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.298 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:54.557 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:54.557 [366/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:54.557 [367/710] Linking static target lib/librte_rib.a 00:02:54.557 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:54.557 [369/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.557 [370/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:54.557 [371/710] Linking static target lib/librte_reorder.a 00:02:54.815 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:54.815 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:54.815 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.815 [375/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:54.815 [376/710] Linking static target lib/librte_stack.a 00:02:54.815 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:55.073 [378/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.073 [379/710] Linking target lib/librte_rib.so.24.0 00:02:55.073 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:55.073 [381/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:55.073 [382/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.073 [383/710] Linking static target lib/librte_security.a 00:02:55.073 [384/710] Linking target lib/librte_stack.so.24.0 00:02:55.073 [385/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:55.336 [386/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.336 [387/710] Linking target lib/librte_mldev.so.24.0 00:02:55.336 [388/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.336 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.593 [390/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.593 [391/710] Linking target lib/librte_security.so.24.0 00:02:55.593 [392/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.593 [393/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:55.593 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:55.593 [395/710] Linking static target lib/librte_sched.a 00:02:56.160 [396/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.160 [397/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:56.160 [398/710] Linking target lib/librte_sched.so.24.0 00:02:56.160 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:56.160 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:56.160 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:56.418 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:56.676 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 
00:02:56.676 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:56.676 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:56.934 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:56.934 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:57.191 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:57.191 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:57.191 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:57.191 [411/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:57.191 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:57.191 [413/710] Linking static target lib/librte_ipsec.a 00:02:57.449 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.708 [415/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:57.708 [416/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:57.708 [417/710] Linking target lib/librte_ipsec.so.24.0 00:02:57.708 [418/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:57.708 [419/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:57.708 [420/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:57.708 [421/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:57.708 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:57.965 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:58.530 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:58.530 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:58.530 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:58.530 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:58.530 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:58.530 [429/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:58.530 [430/710] Linking static target lib/librte_pdcp.a 00:02:58.530 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:58.530 [432/710] Linking static target lib/librte_fib.a 00:02:59.096 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.096 [434/710] Linking target lib/librte_pdcp.so.24.0 00:02:59.096 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.096 [436/710] Linking target lib/librte_fib.so.24.0 00:02:59.096 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:59.354 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:59.354 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:59.612 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:59.612 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:59.612 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:59.870 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:59.870 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:00.129 [445/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:00.129 [446/710] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:00.129 [447/710] Linking static target lib/librte_port.a 00:03:00.129 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:00.129 [449/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:00.129 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:00.388 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:00.388 [452/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:00.646 [453/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.646 [454/710] Linking target lib/librte_port.so.24.0 00:03:00.646 [455/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:00.646 [456/710] Linking static target lib/librte_pdump.a 00:03:00.646 [457/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:00.646 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:00.646 [459/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:00.905 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.905 [461/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:00.905 [462/710] Linking target lib/librte_pdump.so.24.0 00:03:01.164 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:01.423 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:01.423 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:01.423 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:01.423 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:01.423 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:01.682 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:01.941 [470/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:01.941 [471/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:01.941 [472/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:01.941 [473/710] Linking static target lib/librte_table.a 00:03:02.508 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:02.508 [475/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:02.508 [476/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.508 [477/710] Linking target lib/librte_table.so.24.0 00:03:02.508 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:02.508 [479/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:02.766 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:03.026 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:03.026 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:03.318 [483/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:03.318 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:03.318 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:03.318 [486/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 
00:03:03.620 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:03.620 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:03.896 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:03.896 [490/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:03.896 [491/710] Linking static target lib/librte_graph.a 00:03:03.896 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:04.166 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:04.425 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:04.425 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:04.684 [496/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.684 [497/710] Linking target lib/librte_graph.so.24.0 00:03:04.684 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:04.684 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:04.943 [500/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:04.943 [501/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:04.943 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:04.943 [503/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:05.202 [504/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:05.202 [505/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:05.202 [506/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:05.461 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:05.720 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:05.720 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:05.720 [510/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:05.720 [511/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:05.720 [512/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:05.720 [513/710] Linking static target lib/librte_node.a 00:03:05.720 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:05.979 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:05.979 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.979 [517/710] Linking target lib/librte_node.so.24.0 00:03:06.238 [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:06.238 [519/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:06.238 [520/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:06.238 [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:06.497 [522/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:06.497 [523/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.497 [524/710] Linking static target drivers/librte_bus_pci.a 00:03:06.497 [525/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:06.497 [526/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.497 [527/710] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.497 [528/710] Linking static target drivers/librte_bus_vdev.a 00:03:06.756 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:06.756 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.756 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:06.756 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:06.756 [533/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.756 [534/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:06.756 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:07.015 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:07.015 [537/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.015 [538/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:07.015 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:07.015 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:07.015 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:07.015 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.015 [543/710] Linking static target drivers/librte_mempool_ring.a 00:03:07.015 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.015 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:07.274 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:07.532 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:07.791 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:07.791 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:07.791 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:07.791 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:08.729 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:08.729 [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:08.729 [554/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:08.729 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:08.988 [556/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:08.988 [557/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:09.246 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:09.246 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:09.505 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:09.505 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:09.505 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:10.073 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:10.073 [564/710] 
Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:10.073 [565/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:10.073 [566/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:10.332 [567/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:10.332 [568/710] Linking static target lib/librte_vhost.a 00:03:10.590 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:10.590 [570/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:10.590 [571/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:10.849 [572/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:10.849 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:10.849 [574/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:10.849 [575/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:11.417 [576/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:11.417 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:11.417 [578/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:11.417 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:11.417 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:11.417 [581/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.417 [582/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:11.417 [583/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:11.417 [584/710] Linking target lib/librte_vhost.so.24.0 00:03:11.676 [585/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:11.676 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:11.934 [587/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:11.935 [588/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:11.935 [589/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:11.935 [590/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:11.935 [591/710] Linking static target drivers/librte_net_i40e.a 00:03:11.935 [592/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:11.935 [593/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:12.193 [594/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:12.193 [595/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:12.451 [596/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.451 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:12.451 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:12.708 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:12.967 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:12.967 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:12.967 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:13.226 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:13.226 [604/710] 
Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:13.226 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:13.484 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:13.485 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:13.742 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:13.742 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:14.001 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:14.001 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:14.001 [612/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:14.001 [613/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:14.001 [614/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:14.259 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:14.259 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:14.259 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:14.517 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:14.517 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:14.775 [620/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:14.775 [621/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:14.775 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:15.034 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:15.601 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:15.601 [625/710] Linking static target lib/librte_pipeline.a 00:03:15.601 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:15.860 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:15.860 [628/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:15.860 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:15.860 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:16.118 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:16.118 [632/710] Linking target app/dpdk-dumpcap 00:03:16.118 [633/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:16.118 [634/710] Linking target app/dpdk-graph 00:03:16.376 [635/710] Linking target app/dpdk-pdump 00:03:16.376 [636/710] Linking target app/dpdk-proc-info 00:03:16.376 [637/710] Linking target app/dpdk-test-acl 00:03:16.376 [638/710] Linking target app/dpdk-test-cmdline 00:03:16.376 [639/710] Linking target app/dpdk-test-compress-perf 00:03:16.634 [640/710] Linking target app/dpdk-test-crypto-perf 00:03:16.634 [641/710] Linking target app/dpdk-test-dma-perf 00:03:16.634 [642/710] Linking target app/dpdk-test-fib 00:03:16.892 [643/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:16.892 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:16.892 [645/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:17.151 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:17.151 [647/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:17.151 [648/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:17.151 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:17.409 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:17.410 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:17.668 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:17.668 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:17.668 [654/710] Linking target app/dpdk-test-gpudev 00:03:17.668 [655/710] Linking target app/dpdk-test-eventdev 00:03:17.668 [656/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:17.668 [657/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:17.926 [658/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:18.184 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:18.184 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:18.184 [661/710] Linking target app/dpdk-test-flow-perf 00:03:18.184 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:18.184 [663/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.184 [664/710] Linking target lib/librte_pipeline.so.24.0 00:03:18.184 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:18.443 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:18.443 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:18.443 [668/710] Linking target app/dpdk-test-bbdev 00:03:18.701 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:18.701 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:18.701 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:18.958 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:18.958 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:18.958 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:19.217 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:19.217 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:19.217 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:19.475 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:19.733 [679/710] Linking target app/dpdk-test-mldev 00:03:19.733 [680/710] Linking target app/dpdk-test-pipeline 00:03:19.733 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:19.733 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:19.733 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:20.299 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:20.299 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:20.299 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:20.299 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:20.557 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:20.557 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:20.557 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:20.816 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:20.816 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:21.102 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:21.360 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:21.617 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:21.617 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:21.875 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:21.875 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:21.875 [699/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:22.144 [700/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:22.144 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:22.144 [702/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:22.144 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:22.145 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:22.404 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:22.404 [706/710] Linking target app/dpdk-test-regex 00:03:22.404 [707/710] Linking target app/dpdk-test-sad 00:03:22.663 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:22.663 [709/710] Linking target app/dpdk-testpmd 00:03:23.228 [710/710] Linking target app/dpdk-test-security-perf 00:03:23.228 14:43:56 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:23.228 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:23.228 [0/1] Installing files. 
00:03:23.490 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.490 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.491 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.491 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:23.492 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.492 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.493 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.493 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.494 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.494 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:23.495 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:23.495 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.495 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.755 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:23.756 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:23.756 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.756 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.018 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.018 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.018 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.018 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.018 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.018 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.019 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.019 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.019 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.019 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:24.019 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.019 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.020 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.021 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:24.022 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:24.022 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:24.022 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:24.022 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:24.022 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:24.022 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:24.022 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:24.022 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:24.022 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:24.022 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:24.022 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:24.022 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:24.022 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:24.022 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:24.022 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:24.022 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:24.022 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:24.022 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:24.022 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:24.022 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:24.022 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:24.022 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:24.022 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:24.022 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:24.022 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:24.022 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:24.022 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:24.022 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:24.022 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:24.022 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:24.022 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:24.022 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:24.022 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:24.022 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:24.022 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:24.022 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:24.022 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:24.022 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:24.022 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:24.022 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:24.022 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:24.022 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:24.022 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:24.022 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:24.022 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:24.022 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:24.022 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:24.022 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:24.022 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:24.022 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:24.022 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:24.022 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:24.022 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:24.022 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:24.022 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:24.022 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:24.022 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:24.022 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:24.022 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:24.022 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:24.022 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:24.022 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:24.022 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:24.022 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:24.022 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:24.022 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:24.022 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:24.022 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:24.022 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:24.022 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:24.022 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:24.022 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:24.022 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:24.022 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:24.022 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:24.022 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:24.022 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:24.022 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:24.022 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:24.023 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:24.023 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:24.023 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:24.023 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:24.023 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:24.023 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:24.023 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:24.023 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:24.023 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:24.023 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:24.023 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:24.023 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:24.023 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:24.023 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:24.023 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:24.023 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:24.023 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:24.023 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:24.023 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:24.023 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:24.023 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:24.023 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:24.023 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:24.023 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:24.023 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:24.023 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:24.023 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:24.023 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:24.023 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:24.023 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:24.023 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:24.023 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:24.023 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:24.023 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:24.023 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:24.023 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:24.023 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:24.023 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:24.023 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:24.023 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:24.023 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:24.023 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:24.023 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:24.023 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:24.023 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:24.023 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:24.023 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:24.023 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:24.023 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:24.023 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:24.023 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:24.023 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:24.023 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
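The symlink entries above follow the standard three-level shared-library naming scheme: the real file carries the full ABI version (for example librte_log.so.24.0), the soname link (librte_log.so.24) is what programs resolve at run time, and the unversioned librte_log.so is what the linker uses when building against DPDK; the bus and PMD driver libraries additionally get this chain mirrored under dpdk/pmds-24.0 so they can also be picked up as run-time plugins. As a rough illustration only (the install itself is driven by meson/ninja, not by these commands), the same chain could be recreated by hand:

    # illustrative sketch, not part of the build: recreate one symlink chain manually
    cd /home/vagrant/spdk_repo/dpdk/build/lib
    ln -sf librte_log.so.24.0 librte_log.so.24   # soname link, resolved at run time
    ln -sf librte_log.so.24   librte_log.so      # linker name, used when compiling against DPDK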
00:03:24.023 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:24.023 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:24.023 14:43:57 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:24.023 14:43:57 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:24.023 14:43:57 -- common/autobuild_common.sh@203 -- $ cat 00:03:24.023 14:43:57 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:24.023 00:03:24.023 real 0m55.845s 00:03:24.023 user 6m37.109s 00:03:24.023 sys 1m7.059s 00:03:24.023 14:43:57 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:24.023 14:43:57 -- common/autotest_common.sh@10 -- $ set +x 00:03:24.024 ************************************ 00:03:24.024 END TEST build_native_dpdk 00:03:24.024 ************************************ 00:03:24.024 14:43:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:24.024 14:43:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:24.024 14:43:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:24.024 14:43:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:24.024 14:43:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:24.024 14:43:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:24.024 14:43:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:24.024 14:43:57 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:24.282 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:24.282 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:24.282 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:24.282 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:24.848 Using 'verbs' RDMA provider 00:03:40.342 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:52.545 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:52.545 go version go1.21.1 linux/amd64 00:03:52.804 Creating mk/config.mk...done. 00:03:52.804 Creating mk/cc.flags.mk...done. 00:03:52.804 Type 'make' to build. 00:03:52.804 14:44:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:52.804 14:44:25 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:52.804 14:44:25 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:52.804 14:44:25 -- common/autotest_common.sh@10 -- $ set +x 00:03:52.804 ************************************ 00:03:52.804 START TEST make 00:03:52.804 ************************************ 00:03:52.804 14:44:25 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:53.371 make[1]: Nothing to be done for 'all'. 
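With the DPDK artifacts in place, the SPDK configure step above is pointed at them via --with-dpdk=/home/vagrant/spdk_repo/dpdk/build and picks up the generated pkg-config files from /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig. As a minimal, hypothetical sketch of how any consumer could query that private install (SPDK's configure does this through its own scripts, not these exact commands):

    # hedged sketch: query the freshly installed DPDK through its pkg-config metadata
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk       # version recorded in libdpdk.pc
    pkg-config --cflags --libs libdpdk    # compile/link flags a build system would consume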
00:04:15.309 CC lib/ut_mock/mock.o 00:04:15.309 CC lib/ut/ut.o 00:04:15.309 CC lib/log/log.o 00:04:15.309 CC lib/log/log_flags.o 00:04:15.309 CC lib/log/log_deprecated.o 00:04:15.309 LIB libspdk_ut_mock.a 00:04:15.309 LIB libspdk_ut.a 00:04:15.309 SO libspdk_ut_mock.so.5.0 00:04:15.309 LIB libspdk_log.a 00:04:15.309 SO libspdk_ut.so.1.0 00:04:15.309 SO libspdk_log.so.6.1 00:04:15.309 SYMLINK libspdk_ut_mock.so 00:04:15.309 SYMLINK libspdk_ut.so 00:04:15.309 SYMLINK libspdk_log.so 00:04:15.309 CC lib/ioat/ioat.o 00:04:15.309 CXX lib/trace_parser/trace.o 00:04:15.309 CC lib/dma/dma.o 00:04:15.309 CC lib/util/base64.o 00:04:15.309 CC lib/util/cpuset.o 00:04:15.309 CC lib/util/crc16.o 00:04:15.309 CC lib/util/bit_array.o 00:04:15.309 CC lib/util/crc32.o 00:04:15.309 CC lib/util/crc32c.o 00:04:15.309 CC lib/vfio_user/host/vfio_user_pci.o 00:04:15.309 CC lib/util/crc32_ieee.o 00:04:15.309 CC lib/vfio_user/host/vfio_user.o 00:04:15.309 CC lib/util/crc64.o 00:04:15.309 CC lib/util/dif.o 00:04:15.309 LIB libspdk_dma.a 00:04:15.309 CC lib/util/fd.o 00:04:15.309 SO libspdk_dma.so.3.0 00:04:15.309 CC lib/util/file.o 00:04:15.309 SYMLINK libspdk_dma.so 00:04:15.309 CC lib/util/hexlify.o 00:04:15.309 CC lib/util/iov.o 00:04:15.309 CC lib/util/math.o 00:04:15.309 LIB libspdk_ioat.a 00:04:15.309 SO libspdk_ioat.so.6.0 00:04:15.309 LIB libspdk_vfio_user.a 00:04:15.309 CC lib/util/pipe.o 00:04:15.309 CC lib/util/strerror_tls.o 00:04:15.309 SO libspdk_vfio_user.so.4.0 00:04:15.309 SYMLINK libspdk_ioat.so 00:04:15.309 CC lib/util/string.o 00:04:15.309 CC lib/util/uuid.o 00:04:15.309 SYMLINK libspdk_vfio_user.so 00:04:15.309 CC lib/util/fd_group.o 00:04:15.309 CC lib/util/xor.o 00:04:15.309 CC lib/util/zipf.o 00:04:15.309 LIB libspdk_util.a 00:04:15.309 SO libspdk_util.so.8.0 00:04:15.567 SYMLINK libspdk_util.so 00:04:15.567 LIB libspdk_trace_parser.a 00:04:15.567 SO libspdk_trace_parser.so.4.0 00:04:15.567 CC lib/idxd/idxd.o 00:04:15.567 CC lib/idxd/idxd_user.o 00:04:15.567 CC lib/idxd/idxd_kernel.o 00:04:15.567 CC lib/conf/conf.o 00:04:15.567 CC lib/env_dpdk/env.o 00:04:15.567 CC lib/env_dpdk/memory.o 00:04:15.567 CC lib/json/json_parse.o 00:04:15.567 CC lib/vmd/vmd.o 00:04:15.567 CC lib/rdma/common.o 00:04:15.568 SYMLINK libspdk_trace_parser.so 00:04:15.568 CC lib/rdma/rdma_verbs.o 00:04:15.826 CC lib/vmd/led.o 00:04:15.826 LIB libspdk_conf.a 00:04:15.826 CC lib/json/json_util.o 00:04:15.826 CC lib/json/json_write.o 00:04:15.826 SO libspdk_conf.so.5.0 00:04:15.826 CC lib/env_dpdk/pci.o 00:04:15.826 LIB libspdk_rdma.a 00:04:15.827 SYMLINK libspdk_conf.so 00:04:15.827 CC lib/env_dpdk/init.o 00:04:15.827 CC lib/env_dpdk/threads.o 00:04:15.827 SO libspdk_rdma.so.5.0 00:04:15.827 SYMLINK libspdk_rdma.so 00:04:15.827 CC lib/env_dpdk/pci_ioat.o 00:04:16.085 CC lib/env_dpdk/pci_virtio.o 00:04:16.085 CC lib/env_dpdk/pci_vmd.o 00:04:16.085 LIB libspdk_json.a 00:04:16.085 LIB libspdk_idxd.a 00:04:16.085 CC lib/env_dpdk/pci_idxd.o 00:04:16.085 SO libspdk_json.so.5.1 00:04:16.085 SO libspdk_idxd.so.11.0 00:04:16.085 CC lib/env_dpdk/pci_event.o 00:04:16.085 CC lib/env_dpdk/sigbus_handler.o 00:04:16.085 SYMLINK libspdk_json.so 00:04:16.085 CC lib/env_dpdk/pci_dpdk.o 00:04:16.085 SYMLINK libspdk_idxd.so 00:04:16.085 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:16.085 LIB libspdk_vmd.a 00:04:16.085 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:16.085 SO libspdk_vmd.so.5.0 00:04:16.344 SYMLINK libspdk_vmd.so 00:04:16.344 CC lib/jsonrpc/jsonrpc_server.o 00:04:16.344 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:16.344 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:16.344 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:16.603 LIB libspdk_jsonrpc.a 00:04:16.603 SO libspdk_jsonrpc.so.5.1 00:04:16.603 SYMLINK libspdk_jsonrpc.so 00:04:16.862 LIB libspdk_env_dpdk.a 00:04:16.862 CC lib/rpc/rpc.o 00:04:16.862 SO libspdk_env_dpdk.so.13.0 00:04:16.862 LIB libspdk_rpc.a 00:04:17.121 SO libspdk_rpc.so.5.0 00:04:17.121 SYMLINK libspdk_env_dpdk.so 00:04:17.121 SYMLINK libspdk_rpc.so 00:04:17.121 CC lib/notify/notify.o 00:04:17.121 CC lib/notify/notify_rpc.o 00:04:17.121 CC lib/trace/trace.o 00:04:17.121 CC lib/trace/trace_rpc.o 00:04:17.121 CC lib/trace/trace_flags.o 00:04:17.121 CC lib/sock/sock.o 00:04:17.121 CC lib/sock/sock_rpc.o 00:04:17.379 LIB libspdk_notify.a 00:04:17.379 SO libspdk_notify.so.5.0 00:04:17.379 SYMLINK libspdk_notify.so 00:04:17.379 LIB libspdk_trace.a 00:04:17.638 SO libspdk_trace.so.9.0 00:04:17.638 LIB libspdk_sock.a 00:04:17.638 SYMLINK libspdk_trace.so 00:04:17.638 SO libspdk_sock.so.8.0 00:04:17.638 SYMLINK libspdk_sock.so 00:04:17.638 CC lib/thread/thread.o 00:04:17.638 CC lib/thread/iobuf.o 00:04:17.897 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:17.897 CC lib/nvme/nvme_ctrlr.o 00:04:17.897 CC lib/nvme/nvme_ns_cmd.o 00:04:17.897 CC lib/nvme/nvme_fabric.o 00:04:17.897 CC lib/nvme/nvme_ns.o 00:04:17.897 CC lib/nvme/nvme_pcie_common.o 00:04:17.897 CC lib/nvme/nvme_qpair.o 00:04:17.897 CC lib/nvme/nvme_pcie.o 00:04:18.155 CC lib/nvme/nvme.o 00:04:18.413 CC lib/nvme/nvme_quirks.o 00:04:18.671 CC lib/nvme/nvme_transport.o 00:04:18.671 CC lib/nvme/nvme_discovery.o 00:04:18.671 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:18.671 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:18.671 CC lib/nvme/nvme_tcp.o 00:04:18.929 CC lib/nvme/nvme_opal.o 00:04:18.929 CC lib/nvme/nvme_io_msg.o 00:04:18.929 CC lib/nvme/nvme_poll_group.o 00:04:19.188 CC lib/nvme/nvme_zns.o 00:04:19.188 LIB libspdk_thread.a 00:04:19.188 CC lib/nvme/nvme_cuse.o 00:04:19.188 SO libspdk_thread.so.9.0 00:04:19.188 CC lib/nvme/nvme_vfio_user.o 00:04:19.188 CC lib/nvme/nvme_rdma.o 00:04:19.445 SYMLINK libspdk_thread.so 00:04:19.445 CC lib/accel/accel.o 00:04:19.445 CC lib/blob/blobstore.o 00:04:19.445 CC lib/blob/request.o 00:04:19.445 CC lib/blob/zeroes.o 00:04:19.704 CC lib/blob/blob_bs_dev.o 00:04:19.704 CC lib/accel/accel_rpc.o 00:04:19.962 CC lib/accel/accel_sw.o 00:04:19.962 CC lib/init/json_config.o 00:04:19.962 CC lib/init/subsystem.o 00:04:19.962 CC lib/virtio/virtio.o 00:04:19.962 CC lib/virtio/virtio_vhost_user.o 00:04:19.962 CC lib/virtio/virtio_vfio_user.o 00:04:19.962 CC lib/init/subsystem_rpc.o 00:04:19.962 CC lib/init/rpc.o 00:04:20.220 CC lib/virtio/virtio_pci.o 00:04:20.220 LIB libspdk_init.a 00:04:20.220 SO libspdk_init.so.4.0 00:04:20.220 SYMLINK libspdk_init.so 00:04:20.478 LIB libspdk_virtio.a 00:04:20.478 SO libspdk_virtio.so.6.0 00:04:20.478 LIB libspdk_accel.a 00:04:20.478 CC lib/event/app.o 00:04:20.478 CC lib/event/reactor.o 00:04:20.478 CC lib/event/log_rpc.o 00:04:20.478 CC lib/event/app_rpc.o 00:04:20.478 CC lib/event/scheduler_static.o 00:04:20.478 SO libspdk_accel.so.14.0 00:04:20.478 SYMLINK libspdk_virtio.so 00:04:20.478 SYMLINK libspdk_accel.so 00:04:20.736 CC lib/bdev/bdev.o 00:04:20.736 CC lib/bdev/bdev_rpc.o 00:04:20.736 CC lib/bdev/scsi_nvme.o 00:04:20.736 CC lib/bdev/part.o 00:04:20.736 CC lib/bdev/bdev_zone.o 00:04:20.736 LIB libspdk_nvme.a 00:04:20.736 LIB libspdk_event.a 00:04:20.994 SO libspdk_nvme.so.12.0 00:04:20.994 SO libspdk_event.so.12.0 00:04:20.994 SYMLINK libspdk_event.so 00:04:20.994 SYMLINK libspdk_nvme.so 00:04:21.929 
LIB libspdk_blob.a 00:04:21.929 SO libspdk_blob.so.10.1 00:04:21.929 SYMLINK libspdk_blob.so 00:04:22.188 CC lib/lvol/lvol.o 00:04:22.188 CC lib/blobfs/blobfs.o 00:04:22.188 CC lib/blobfs/tree.o 00:04:22.754 LIB libspdk_bdev.a 00:04:23.013 SO libspdk_bdev.so.14.0 00:04:23.013 LIB libspdk_lvol.a 00:04:23.013 SO libspdk_lvol.so.9.1 00:04:23.013 LIB libspdk_blobfs.a 00:04:23.013 SYMLINK libspdk_bdev.so 00:04:23.013 SYMLINK libspdk_lvol.so 00:04:23.013 SO libspdk_blobfs.so.9.0 00:04:23.013 SYMLINK libspdk_blobfs.so 00:04:23.013 CC lib/ublk/ublk.o 00:04:23.013 CC lib/nbd/nbd.o 00:04:23.013 CC lib/nbd/nbd_rpc.o 00:04:23.013 CC lib/ublk/ublk_rpc.o 00:04:23.271 CC lib/scsi/dev.o 00:04:23.271 CC lib/scsi/lun.o 00:04:23.271 CC lib/scsi/port.o 00:04:23.271 CC lib/nvmf/ctrlr.o 00:04:23.271 CC lib/scsi/scsi.o 00:04:23.271 CC lib/ftl/ftl_core.o 00:04:23.271 CC lib/scsi/scsi_bdev.o 00:04:23.271 CC lib/scsi/scsi_pr.o 00:04:23.271 CC lib/scsi/scsi_rpc.o 00:04:23.271 CC lib/scsi/task.o 00:04:23.529 CC lib/nvmf/ctrlr_discovery.o 00:04:23.529 CC lib/nvmf/ctrlr_bdev.o 00:04:23.529 CC lib/nvmf/subsystem.o 00:04:23.529 CC lib/ftl/ftl_init.o 00:04:23.529 CC lib/nvmf/nvmf.o 00:04:23.529 CC lib/nvmf/nvmf_rpc.o 00:04:23.529 LIB libspdk_nbd.a 00:04:23.529 SO libspdk_nbd.so.6.0 00:04:23.787 LIB libspdk_scsi.a 00:04:23.787 SYMLINK libspdk_nbd.so 00:04:23.787 CC lib/ftl/ftl_layout.o 00:04:23.787 CC lib/ftl/ftl_debug.o 00:04:23.787 SO libspdk_scsi.so.8.0 00:04:23.787 LIB libspdk_ublk.a 00:04:23.787 SO libspdk_ublk.so.2.0 00:04:23.787 SYMLINK libspdk_scsi.so 00:04:23.787 SYMLINK libspdk_ublk.so 00:04:23.787 CC lib/ftl/ftl_io.o 00:04:24.046 CC lib/ftl/ftl_sb.o 00:04:24.046 CC lib/nvmf/transport.o 00:04:24.046 CC lib/iscsi/conn.o 00:04:24.046 CC lib/iscsi/init_grp.o 00:04:24.046 CC lib/vhost/vhost.o 00:04:24.046 CC lib/vhost/vhost_rpc.o 00:04:24.046 CC lib/ftl/ftl_l2p.o 00:04:24.305 CC lib/ftl/ftl_l2p_flat.o 00:04:24.305 CC lib/vhost/vhost_scsi.o 00:04:24.305 CC lib/vhost/vhost_blk.o 00:04:24.563 CC lib/ftl/ftl_nv_cache.o 00:04:24.563 CC lib/ftl/ftl_band.o 00:04:24.563 CC lib/ftl/ftl_band_ops.o 00:04:24.563 CC lib/iscsi/iscsi.o 00:04:24.563 CC lib/iscsi/md5.o 00:04:24.563 CC lib/nvmf/tcp.o 00:04:24.563 CC lib/vhost/rte_vhost_user.o 00:04:24.822 CC lib/iscsi/param.o 00:04:24.822 CC lib/nvmf/rdma.o 00:04:24.822 CC lib/ftl/ftl_writer.o 00:04:24.822 CC lib/ftl/ftl_rq.o 00:04:25.081 CC lib/ftl/ftl_reloc.o 00:04:25.081 CC lib/iscsi/portal_grp.o 00:04:25.081 CC lib/iscsi/tgt_node.o 00:04:25.081 CC lib/iscsi/iscsi_subsystem.o 00:04:25.340 CC lib/iscsi/iscsi_rpc.o 00:04:25.340 CC lib/iscsi/task.o 00:04:25.340 CC lib/ftl/ftl_l2p_cache.o 00:04:25.340 CC lib/ftl/ftl_p2l.o 00:04:25.340 CC lib/ftl/mngt/ftl_mngt.o 00:04:25.599 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:25.599 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:25.599 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:25.599 LIB libspdk_vhost.a 00:04:25.599 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:25.599 SO libspdk_vhost.so.7.1 00:04:25.599 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:25.599 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:25.599 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:25.858 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:25.858 SYMLINK libspdk_vhost.so 00:04:25.858 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:25.858 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:25.858 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:25.858 LIB libspdk_iscsi.a 00:04:25.858 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:25.858 CC lib/ftl/utils/ftl_conf.o 00:04:25.858 CC lib/ftl/utils/ftl_md.o 00:04:25.858 CC lib/ftl/utils/ftl_mempool.o 00:04:25.858 SO 
libspdk_iscsi.so.7.0 00:04:26.117 CC lib/ftl/utils/ftl_bitmap.o 00:04:26.117 CC lib/ftl/utils/ftl_property.o 00:04:26.117 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:26.117 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:26.117 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:26.117 SYMLINK libspdk_iscsi.so 00:04:26.117 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:26.117 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:26.117 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:26.117 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:26.117 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:26.376 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:26.376 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:26.376 CC lib/ftl/base/ftl_base_dev.o 00:04:26.376 CC lib/ftl/base/ftl_base_bdev.o 00:04:26.376 CC lib/ftl/ftl_trace.o 00:04:26.634 LIB libspdk_ftl.a 00:04:26.634 LIB libspdk_nvmf.a 00:04:26.634 SO libspdk_nvmf.so.17.0 00:04:26.892 SO libspdk_ftl.so.8.0 00:04:26.892 SYMLINK libspdk_nvmf.so 00:04:26.892 SYMLINK libspdk_ftl.so 00:04:27.151 CC module/env_dpdk/env_dpdk_rpc.o 00:04:27.151 CC module/scheduler/gscheduler/gscheduler.o 00:04:27.410 CC module/accel/ioat/accel_ioat.o 00:04:27.410 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:27.410 CC module/accel/dsa/accel_dsa.o 00:04:27.410 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:27.410 CC module/accel/error/accel_error.o 00:04:27.410 CC module/sock/posix/posix.o 00:04:27.410 CC module/accel/iaa/accel_iaa.o 00:04:27.410 CC module/blob/bdev/blob_bdev.o 00:04:27.410 LIB libspdk_env_dpdk_rpc.a 00:04:27.410 SO libspdk_env_dpdk_rpc.so.5.0 00:04:27.410 SYMLINK libspdk_env_dpdk_rpc.so 00:04:27.410 CC module/accel/ioat/accel_ioat_rpc.o 00:04:27.410 LIB libspdk_scheduler_dpdk_governor.a 00:04:27.410 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:27.410 CC module/accel/error/accel_error_rpc.o 00:04:27.410 LIB libspdk_scheduler_gscheduler.a 00:04:27.410 LIB libspdk_scheduler_dynamic.a 00:04:27.410 CC module/accel/iaa/accel_iaa_rpc.o 00:04:27.410 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:27.410 SO libspdk_scheduler_gscheduler.so.3.0 00:04:27.410 CC module/accel/dsa/accel_dsa_rpc.o 00:04:27.410 SO libspdk_scheduler_dynamic.so.3.0 00:04:27.410 SYMLINK libspdk_scheduler_gscheduler.so 00:04:27.669 SYMLINK libspdk_scheduler_dynamic.so 00:04:27.669 LIB libspdk_accel_ioat.a 00:04:27.669 SO libspdk_accel_ioat.so.5.0 00:04:27.669 LIB libspdk_accel_error.a 00:04:27.669 LIB libspdk_accel_iaa.a 00:04:27.669 LIB libspdk_accel_dsa.a 00:04:27.669 LIB libspdk_blob_bdev.a 00:04:27.669 SYMLINK libspdk_accel_ioat.so 00:04:27.669 SO libspdk_accel_error.so.1.0 00:04:27.669 SO libspdk_accel_dsa.so.4.0 00:04:27.669 SO libspdk_accel_iaa.so.2.0 00:04:27.669 SO libspdk_blob_bdev.so.10.1 00:04:27.669 SYMLINK libspdk_accel_error.so 00:04:27.669 SYMLINK libspdk_accel_dsa.so 00:04:27.669 SYMLINK libspdk_blob_bdev.so 00:04:27.669 SYMLINK libspdk_accel_iaa.so 00:04:27.928 CC module/blobfs/bdev/blobfs_bdev.o 00:04:27.928 CC module/bdev/error/vbdev_error.o 00:04:27.928 CC module/bdev/malloc/bdev_malloc.o 00:04:27.928 CC module/bdev/delay/vbdev_delay.o 00:04:27.928 CC module/bdev/gpt/gpt.o 00:04:27.928 CC module/bdev/passthru/vbdev_passthru.o 00:04:27.928 CC module/bdev/null/bdev_null.o 00:04:27.928 CC module/bdev/lvol/vbdev_lvol.o 00:04:27.928 CC module/bdev/nvme/bdev_nvme.o 00:04:27.928 LIB libspdk_sock_posix.a 00:04:27.928 SO libspdk_sock_posix.so.5.0 00:04:28.198 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:28.198 SYMLINK libspdk_sock_posix.so 00:04:28.198 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:28.198 CC 
module/bdev/gpt/vbdev_gpt.o 00:04:28.198 CC module/bdev/error/vbdev_error_rpc.o 00:04:28.198 CC module/bdev/null/bdev_null_rpc.o 00:04:28.198 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:28.198 LIB libspdk_blobfs_bdev.a 00:04:28.198 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:28.198 SO libspdk_blobfs_bdev.so.5.0 00:04:28.198 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:28.198 LIB libspdk_bdev_error.a 00:04:28.198 LIB libspdk_bdev_gpt.a 00:04:28.198 SYMLINK libspdk_blobfs_bdev.so 00:04:28.198 LIB libspdk_bdev_null.a 00:04:28.457 SO libspdk_bdev_error.so.5.0 00:04:28.457 LIB libspdk_bdev_passthru.a 00:04:28.457 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:28.457 SO libspdk_bdev_gpt.so.5.0 00:04:28.457 SO libspdk_bdev_null.so.5.0 00:04:28.457 SO libspdk_bdev_passthru.so.5.0 00:04:28.457 LIB libspdk_bdev_lvol.a 00:04:28.457 LIB libspdk_bdev_delay.a 00:04:28.457 SYMLINK libspdk_bdev_error.so 00:04:28.457 CC module/bdev/nvme/nvme_rpc.o 00:04:28.457 SYMLINK libspdk_bdev_gpt.so 00:04:28.457 SYMLINK libspdk_bdev_passthru.so 00:04:28.457 SO libspdk_bdev_lvol.so.5.0 00:04:28.457 SYMLINK libspdk_bdev_null.so 00:04:28.457 SO libspdk_bdev_delay.so.5.0 00:04:28.457 CC module/bdev/nvme/bdev_mdns_client.o 00:04:28.457 CC module/bdev/raid/bdev_raid.o 00:04:28.457 LIB libspdk_bdev_malloc.a 00:04:28.457 SYMLINK libspdk_bdev_delay.so 00:04:28.457 SYMLINK libspdk_bdev_lvol.so 00:04:28.457 SO libspdk_bdev_malloc.so.5.0 00:04:28.457 CC module/bdev/split/vbdev_split.o 00:04:28.457 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:28.457 SYMLINK libspdk_bdev_malloc.so 00:04:28.457 CC module/bdev/split/vbdev_split_rpc.o 00:04:28.716 CC module/bdev/aio/bdev_aio.o 00:04:28.716 CC module/bdev/ftl/bdev_ftl.o 00:04:28.716 CC module/bdev/aio/bdev_aio_rpc.o 00:04:28.716 CC module/bdev/raid/bdev_raid_rpc.o 00:04:28.716 LIB libspdk_bdev_split.a 00:04:28.716 CC module/bdev/iscsi/bdev_iscsi.o 00:04:28.716 SO libspdk_bdev_split.so.5.0 00:04:28.976 CC module/bdev/raid/bdev_raid_sb.o 00:04:28.976 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:28.976 SYMLINK libspdk_bdev_split.so 00:04:28.976 CC module/bdev/nvme/vbdev_opal.o 00:04:28.976 LIB libspdk_bdev_aio.a 00:04:28.976 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:28.976 SO libspdk_bdev_aio.so.5.0 00:04:28.976 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:28.976 SYMLINK libspdk_bdev_aio.so 00:04:28.976 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:28.976 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:28.976 LIB libspdk_bdev_zone_block.a 00:04:28.976 SO libspdk_bdev_zone_block.so.5.0 00:04:28.976 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:29.236 SYMLINK libspdk_bdev_zone_block.so 00:04:29.236 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:29.236 CC module/bdev/raid/raid0.o 00:04:29.236 CC module/bdev/raid/raid1.o 00:04:29.236 LIB libspdk_bdev_iscsi.a 00:04:29.236 LIB libspdk_bdev_ftl.a 00:04:29.236 SO libspdk_bdev_iscsi.so.5.0 00:04:29.236 SO libspdk_bdev_ftl.so.5.0 00:04:29.236 CC module/bdev/raid/concat.o 00:04:29.236 SYMLINK libspdk_bdev_iscsi.so 00:04:29.236 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:29.236 SYMLINK libspdk_bdev_ftl.so 00:04:29.495 LIB libspdk_bdev_raid.a 00:04:29.495 LIB libspdk_bdev_virtio.a 00:04:29.495 SO libspdk_bdev_raid.so.5.0 00:04:29.495 SO libspdk_bdev_virtio.so.5.0 00:04:29.754 SYMLINK libspdk_bdev_raid.so 00:04:29.754 SYMLINK libspdk_bdev_virtio.so 00:04:30.013 LIB libspdk_bdev_nvme.a 00:04:30.013 SO libspdk_bdev_nvme.so.6.0 00:04:30.013 SYMLINK libspdk_bdev_nvme.so 00:04:30.272 CC module/event/subsystems/vmd/vmd.o 00:04:30.272 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:04:30.530 CC module/event/subsystems/sock/sock.o 00:04:30.530 CC module/event/subsystems/iobuf/iobuf.o 00:04:30.530 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:30.530 CC module/event/subsystems/scheduler/scheduler.o 00:04:30.530 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:30.530 LIB libspdk_event_sock.a 00:04:30.530 LIB libspdk_event_vmd.a 00:04:30.530 SO libspdk_event_sock.so.4.0 00:04:30.530 LIB libspdk_event_iobuf.a 00:04:30.530 LIB libspdk_event_scheduler.a 00:04:30.530 SO libspdk_event_vmd.so.5.0 00:04:30.530 SO libspdk_event_iobuf.so.2.0 00:04:30.530 LIB libspdk_event_vhost_blk.a 00:04:30.530 SYMLINK libspdk_event_sock.so 00:04:30.530 SO libspdk_event_scheduler.so.3.0 00:04:30.530 SO libspdk_event_vhost_blk.so.2.0 00:04:30.530 SYMLINK libspdk_event_vmd.so 00:04:30.530 SYMLINK libspdk_event_iobuf.so 00:04:30.530 SYMLINK libspdk_event_scheduler.so 00:04:30.530 SYMLINK libspdk_event_vhost_blk.so 00:04:30.789 CC module/event/subsystems/accel/accel.o 00:04:31.048 LIB libspdk_event_accel.a 00:04:31.048 SO libspdk_event_accel.so.5.0 00:04:31.048 SYMLINK libspdk_event_accel.so 00:04:31.307 CC module/event/subsystems/bdev/bdev.o 00:04:31.307 LIB libspdk_event_bdev.a 00:04:31.565 SO libspdk_event_bdev.so.5.0 00:04:31.565 SYMLINK libspdk_event_bdev.so 00:04:31.565 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:31.565 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:31.565 CC module/event/subsystems/scsi/scsi.o 00:04:31.565 CC module/event/subsystems/ublk/ublk.o 00:04:31.565 CC module/event/subsystems/nbd/nbd.o 00:04:31.824 LIB libspdk_event_ublk.a 00:04:31.824 LIB libspdk_event_nbd.a 00:04:31.824 LIB libspdk_event_scsi.a 00:04:31.824 SO libspdk_event_nbd.so.5.0 00:04:31.824 SO libspdk_event_ublk.so.2.0 00:04:31.824 SO libspdk_event_scsi.so.5.0 00:04:31.824 SYMLINK libspdk_event_nbd.so 00:04:31.824 SYMLINK libspdk_event_ublk.so 00:04:31.824 LIB libspdk_event_nvmf.a 00:04:31.824 SYMLINK libspdk_event_scsi.so 00:04:32.114 SO libspdk_event_nvmf.so.5.0 00:04:32.114 SYMLINK libspdk_event_nvmf.so 00:04:32.114 CC module/event/subsystems/iscsi/iscsi.o 00:04:32.114 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:32.444 LIB libspdk_event_vhost_scsi.a 00:04:32.444 LIB libspdk_event_iscsi.a 00:04:32.444 SO libspdk_event_vhost_scsi.so.2.0 00:04:32.444 SO libspdk_event_iscsi.so.5.0 00:04:32.444 SYMLINK libspdk_event_vhost_scsi.so 00:04:32.444 SYMLINK libspdk_event_iscsi.so 00:04:32.444 SO libspdk.so.5.0 00:04:32.444 SYMLINK libspdk.so 00:04:32.717 CXX app/trace/trace.o 00:04:32.717 CC examples/ioat/perf/perf.o 00:04:32.717 CC examples/nvme/hello_world/hello_world.o 00:04:32.717 CC examples/sock/hello_world/hello_sock.o 00:04:32.717 CC examples/accel/perf/accel_perf.o 00:04:32.717 CC examples/vmd/lsvmd/lsvmd.o 00:04:32.717 CC examples/bdev/hello_world/hello_bdev.o 00:04:32.717 CC examples/nvmf/nvmf/nvmf.o 00:04:32.717 CC test/accel/dif/dif.o 00:04:32.717 CC examples/blob/hello_world/hello_blob.o 00:04:32.979 LINK lsvmd 00:04:32.979 LINK ioat_perf 00:04:32.979 LINK hello_world 00:04:32.979 LINK hello_sock 00:04:32.979 LINK hello_bdev 00:04:32.979 LINK hello_blob 00:04:33.238 CC examples/vmd/led/led.o 00:04:33.238 LINK nvmf 00:04:33.238 LINK spdk_trace 00:04:33.238 LINK dif 00:04:33.238 CC examples/ioat/verify/verify.o 00:04:33.238 LINK accel_perf 00:04:33.238 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:33.238 CC examples/nvme/reconnect/reconnect.o 00:04:33.238 LINK led 00:04:33.238 CC examples/bdev/bdevperf/bdevperf.o 00:04:33.238 CC 
app/trace_record/trace_record.o 00:04:33.238 CC examples/blob/cli/blobcli.o 00:04:33.497 CC examples/util/zipf/zipf.o 00:04:33.497 LINK verify 00:04:33.497 CC test/app/bdev_svc/bdev_svc.o 00:04:33.497 CC test/bdev/bdevio/bdevio.o 00:04:33.497 CC examples/thread/thread/thread_ex.o 00:04:33.497 LINK zipf 00:04:33.497 LINK spdk_trace_record 00:04:33.497 LINK reconnect 00:04:33.755 LINK bdev_svc 00:04:33.755 LINK nvme_manage 00:04:33.755 CC examples/idxd/perf/perf.o 00:04:33.755 CC app/nvmf_tgt/nvmf_main.o 00:04:33.755 LINK blobcli 00:04:33.755 CC app/iscsi_tgt/iscsi_tgt.o 00:04:33.755 CC app/spdk_tgt/spdk_tgt.o 00:04:33.755 LINK thread 00:04:34.014 LINK bdevio 00:04:34.014 CC examples/nvme/arbitration/arbitration.o 00:04:34.014 LINK nvmf_tgt 00:04:34.014 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:34.014 LINK spdk_tgt 00:04:34.015 LINK iscsi_tgt 00:04:34.015 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:34.015 LINK bdevperf 00:04:34.015 CC test/app/histogram_perf/histogram_perf.o 00:04:34.015 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:34.015 LINK idxd_perf 00:04:34.273 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:34.273 LINK arbitration 00:04:34.273 LINK histogram_perf 00:04:34.273 CC app/spdk_lspci/spdk_lspci.o 00:04:34.273 CC app/spdk_nvme_perf/perf.o 00:04:34.273 CC app/spdk_nvme_identify/identify.o 00:04:34.273 LINK nvme_fuzz 00:04:34.273 CC app/spdk_nvme_discover/discovery_aer.o 00:04:34.273 CC examples/nvme/hotplug/hotplug.o 00:04:34.532 LINK spdk_lspci 00:04:34.532 CC app/spdk_top/spdk_top.o 00:04:34.532 LINK spdk_nvme_discover 00:04:34.533 LINK vhost_fuzz 00:04:34.533 CC app/vhost/vhost.o 00:04:34.533 LINK hotplug 00:04:34.533 CC test/app/jsoncat/jsoncat.o 00:04:34.792 CC test/app/stub/stub.o 00:04:34.792 LINK vhost 00:04:34.792 LINK jsoncat 00:04:34.792 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:34.792 CC app/spdk_dd/spdk_dd.o 00:04:34.792 LINK stub 00:04:35.051 LINK cmb_copy 00:04:35.051 LINK spdk_nvme_identify 00:04:35.051 LINK spdk_nvme_perf 00:04:35.051 CC app/fio/nvme/fio_plugin.o 00:04:35.051 CC test/blobfs/mkfs/mkfs.o 00:04:35.051 LINK spdk_dd 00:04:35.051 CC examples/nvme/abort/abort.o 00:04:35.051 TEST_HEADER include/spdk/accel.h 00:04:35.051 TEST_HEADER include/spdk/accel_module.h 00:04:35.051 TEST_HEADER include/spdk/assert.h 00:04:35.051 CC app/fio/bdev/fio_plugin.o 00:04:35.051 TEST_HEADER include/spdk/barrier.h 00:04:35.051 TEST_HEADER include/spdk/base64.h 00:04:35.309 TEST_HEADER include/spdk/bdev.h 00:04:35.309 TEST_HEADER include/spdk/bdev_module.h 00:04:35.309 TEST_HEADER include/spdk/bdev_zone.h 00:04:35.309 TEST_HEADER include/spdk/bit_array.h 00:04:35.309 TEST_HEADER include/spdk/bit_pool.h 00:04:35.309 TEST_HEADER include/spdk/blob_bdev.h 00:04:35.309 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:35.309 TEST_HEADER include/spdk/blobfs.h 00:04:35.309 TEST_HEADER include/spdk/blob.h 00:04:35.309 TEST_HEADER include/spdk/conf.h 00:04:35.309 TEST_HEADER include/spdk/config.h 00:04:35.309 TEST_HEADER include/spdk/cpuset.h 00:04:35.309 TEST_HEADER include/spdk/crc16.h 00:04:35.309 TEST_HEADER include/spdk/crc32.h 00:04:35.309 TEST_HEADER include/spdk/crc64.h 00:04:35.309 TEST_HEADER include/spdk/dif.h 00:04:35.309 TEST_HEADER include/spdk/dma.h 00:04:35.309 TEST_HEADER include/spdk/endian.h 00:04:35.309 TEST_HEADER include/spdk/env_dpdk.h 00:04:35.309 TEST_HEADER include/spdk/env.h 00:04:35.309 TEST_HEADER include/spdk/event.h 00:04:35.309 TEST_HEADER include/spdk/fd_group.h 00:04:35.309 TEST_HEADER include/spdk/fd.h 00:04:35.309 TEST_HEADER 
include/spdk/file.h 00:04:35.309 TEST_HEADER include/spdk/ftl.h 00:04:35.309 TEST_HEADER include/spdk/gpt_spec.h 00:04:35.309 TEST_HEADER include/spdk/hexlify.h 00:04:35.309 TEST_HEADER include/spdk/histogram_data.h 00:04:35.309 TEST_HEADER include/spdk/idxd.h 00:04:35.309 TEST_HEADER include/spdk/idxd_spec.h 00:04:35.309 TEST_HEADER include/spdk/init.h 00:04:35.309 TEST_HEADER include/spdk/ioat.h 00:04:35.309 LINK mkfs 00:04:35.309 TEST_HEADER include/spdk/ioat_spec.h 00:04:35.309 TEST_HEADER include/spdk/iscsi_spec.h 00:04:35.309 TEST_HEADER include/spdk/json.h 00:04:35.309 TEST_HEADER include/spdk/jsonrpc.h 00:04:35.309 LINK spdk_top 00:04:35.309 TEST_HEADER include/spdk/likely.h 00:04:35.309 TEST_HEADER include/spdk/log.h 00:04:35.309 TEST_HEADER include/spdk/lvol.h 00:04:35.309 TEST_HEADER include/spdk/memory.h 00:04:35.309 TEST_HEADER include/spdk/mmio.h 00:04:35.309 TEST_HEADER include/spdk/nbd.h 00:04:35.310 TEST_HEADER include/spdk/notify.h 00:04:35.310 TEST_HEADER include/spdk/nvme.h 00:04:35.310 TEST_HEADER include/spdk/nvme_intel.h 00:04:35.310 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:35.310 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:35.310 TEST_HEADER include/spdk/nvme_spec.h 00:04:35.310 TEST_HEADER include/spdk/nvme_zns.h 00:04:35.310 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:35.310 CC test/dma/test_dma/test_dma.o 00:04:35.310 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:35.310 TEST_HEADER include/spdk/nvmf.h 00:04:35.310 TEST_HEADER include/spdk/nvmf_spec.h 00:04:35.310 TEST_HEADER include/spdk/nvmf_transport.h 00:04:35.310 TEST_HEADER include/spdk/opal.h 00:04:35.310 TEST_HEADER include/spdk/opal_spec.h 00:04:35.310 TEST_HEADER include/spdk/pci_ids.h 00:04:35.310 TEST_HEADER include/spdk/pipe.h 00:04:35.310 TEST_HEADER include/spdk/queue.h 00:04:35.310 TEST_HEADER include/spdk/reduce.h 00:04:35.310 TEST_HEADER include/spdk/rpc.h 00:04:35.310 TEST_HEADER include/spdk/scheduler.h 00:04:35.310 TEST_HEADER include/spdk/scsi.h 00:04:35.310 TEST_HEADER include/spdk/scsi_spec.h 00:04:35.310 TEST_HEADER include/spdk/sock.h 00:04:35.310 TEST_HEADER include/spdk/stdinc.h 00:04:35.310 TEST_HEADER include/spdk/string.h 00:04:35.310 TEST_HEADER include/spdk/thread.h 00:04:35.310 TEST_HEADER include/spdk/trace.h 00:04:35.310 TEST_HEADER include/spdk/trace_parser.h 00:04:35.310 TEST_HEADER include/spdk/tree.h 00:04:35.310 TEST_HEADER include/spdk/ublk.h 00:04:35.310 TEST_HEADER include/spdk/util.h 00:04:35.310 TEST_HEADER include/spdk/uuid.h 00:04:35.310 TEST_HEADER include/spdk/version.h 00:04:35.310 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:35.310 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:35.310 TEST_HEADER include/spdk/vhost.h 00:04:35.568 TEST_HEADER include/spdk/vmd.h 00:04:35.568 TEST_HEADER include/spdk/xor.h 00:04:35.568 TEST_HEADER include/spdk/zipf.h 00:04:35.568 CXX test/cpp_headers/accel.o 00:04:35.568 CXX test/cpp_headers/accel_module.o 00:04:35.568 CXX test/cpp_headers/assert.o 00:04:35.568 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:35.568 LINK abort 00:04:35.568 LINK spdk_nvme 00:04:35.568 CXX test/cpp_headers/barrier.o 00:04:35.568 CXX test/cpp_headers/base64.o 00:04:35.568 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:35.568 LINK interrupt_tgt 00:04:35.827 CXX test/cpp_headers/bdev.o 00:04:35.827 LINK spdk_bdev 00:04:35.827 LINK test_dma 00:04:35.827 LINK iscsi_fuzz 00:04:35.827 CXX test/cpp_headers/bdev_module.o 00:04:35.827 CXX test/cpp_headers/bdev_zone.o 00:04:35.827 LINK pmr_persistence 00:04:35.827 CXX 
test/cpp_headers/bit_array.o 00:04:35.827 CC test/env/mem_callbacks/mem_callbacks.o 00:04:35.827 CXX test/cpp_headers/bit_pool.o 00:04:35.827 CXX test/cpp_headers/blob_bdev.o 00:04:35.827 CC test/event/event_perf/event_perf.o 00:04:36.085 CXX test/cpp_headers/blobfs_bdev.o 00:04:36.085 CC test/lvol/esnap/esnap.o 00:04:36.085 CC test/rpc_client/rpc_client_test.o 00:04:36.085 LINK event_perf 00:04:36.085 CC test/env/vtophys/vtophys.o 00:04:36.085 CC test/event/reactor/reactor.o 00:04:36.085 CC test/nvme/aer/aer.o 00:04:36.085 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:36.344 CXX test/cpp_headers/blobfs.o 00:04:36.344 LINK reactor 00:04:36.344 LINK vtophys 00:04:36.344 LINK rpc_client_test 00:04:36.344 CC test/env/memory/memory_ut.o 00:04:36.344 LINK env_dpdk_post_init 00:04:36.344 CXX test/cpp_headers/blob.o 00:04:36.344 CXX test/cpp_headers/conf.o 00:04:36.344 LINK aer 00:04:36.344 CC test/env/pci/pci_ut.o 00:04:36.344 CC test/event/reactor_perf/reactor_perf.o 00:04:36.604 LINK mem_callbacks 00:04:36.604 CC test/nvme/reset/reset.o 00:04:36.604 CXX test/cpp_headers/config.o 00:04:36.604 CXX test/cpp_headers/cpuset.o 00:04:36.604 CXX test/cpp_headers/crc16.o 00:04:36.604 LINK reactor_perf 00:04:36.604 CXX test/cpp_headers/crc32.o 00:04:36.604 CXX test/cpp_headers/crc64.o 00:04:36.862 CXX test/cpp_headers/dif.o 00:04:36.862 LINK reset 00:04:36.862 CC test/nvme/sgl/sgl.o 00:04:36.862 CC test/event/app_repeat/app_repeat.o 00:04:36.862 LINK pci_ut 00:04:36.862 CC test/nvme/e2edp/nvme_dp.o 00:04:36.862 CC test/event/scheduler/scheduler.o 00:04:36.862 CXX test/cpp_headers/dma.o 00:04:37.122 CXX test/cpp_headers/endian.o 00:04:37.122 LINK app_repeat 00:04:37.122 CXX test/cpp_headers/env_dpdk.o 00:04:37.122 LINK memory_ut 00:04:37.122 CC test/nvme/overhead/overhead.o 00:04:37.122 LINK scheduler 00:04:37.380 CXX test/cpp_headers/env.o 00:04:37.380 CC test/thread/poller_perf/poller_perf.o 00:04:37.380 LINK sgl 00:04:37.380 LINK nvme_dp 00:04:37.380 CXX test/cpp_headers/event.o 00:04:37.380 CXX test/cpp_headers/fd_group.o 00:04:37.380 LINK poller_perf 00:04:37.380 CXX test/cpp_headers/fd.o 00:04:37.380 CXX test/cpp_headers/file.o 00:04:37.380 CC test/nvme/err_injection/err_injection.o 00:04:37.380 LINK overhead 00:04:37.639 CC test/nvme/startup/startup.o 00:04:37.639 CXX test/cpp_headers/ftl.o 00:04:37.639 CC test/nvme/reserve/reserve.o 00:04:37.639 CC test/nvme/simple_copy/simple_copy.o 00:04:37.639 CC test/nvme/connect_stress/connect_stress.o 00:04:37.639 LINK err_injection 00:04:37.639 CC test/nvme/boot_partition/boot_partition.o 00:04:37.639 CC test/nvme/compliance/nvme_compliance.o 00:04:37.639 LINK startup 00:04:37.639 CXX test/cpp_headers/gpt_spec.o 00:04:37.898 LINK reserve 00:04:37.898 LINK connect_stress 00:04:37.898 LINK boot_partition 00:04:37.898 LINK simple_copy 00:04:37.898 CC test/nvme/fused_ordering/fused_ordering.o 00:04:37.898 CXX test/cpp_headers/hexlify.o 00:04:37.898 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:37.898 CC test/nvme/fdp/fdp.o 00:04:38.157 CXX test/cpp_headers/histogram_data.o 00:04:38.157 CXX test/cpp_headers/idxd.o 00:04:38.157 LINK nvme_compliance 00:04:38.157 CC test/nvme/cuse/cuse.o 00:04:38.157 CXX test/cpp_headers/idxd_spec.o 00:04:38.157 LINK fused_ordering 00:04:38.157 LINK doorbell_aers 00:04:38.415 CXX test/cpp_headers/init.o 00:04:38.415 CXX test/cpp_headers/ioat.o 00:04:38.415 CXX test/cpp_headers/ioat_spec.o 00:04:38.415 CXX test/cpp_headers/iscsi_spec.o 00:04:38.415 CXX test/cpp_headers/json.o 00:04:38.415 CXX 
test/cpp_headers/jsonrpc.o 00:04:38.415 CXX test/cpp_headers/likely.o 00:04:38.415 CXX test/cpp_headers/log.o 00:04:38.415 LINK fdp 00:04:38.415 CXX test/cpp_headers/lvol.o 00:04:38.674 CXX test/cpp_headers/memory.o 00:04:38.674 CXX test/cpp_headers/mmio.o 00:04:38.674 CXX test/cpp_headers/nbd.o 00:04:38.674 CXX test/cpp_headers/notify.o 00:04:38.674 CXX test/cpp_headers/nvme.o 00:04:38.674 CXX test/cpp_headers/nvme_intel.o 00:04:38.674 CXX test/cpp_headers/nvme_ocssd.o 00:04:38.674 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:38.674 CXX test/cpp_headers/nvme_spec.o 00:04:38.934 CXX test/cpp_headers/nvme_zns.o 00:04:38.934 CXX test/cpp_headers/nvmf_cmd.o 00:04:38.934 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:38.934 CXX test/cpp_headers/nvmf.o 00:04:38.934 CXX test/cpp_headers/nvmf_spec.o 00:04:38.934 CXX test/cpp_headers/nvmf_transport.o 00:04:38.934 CXX test/cpp_headers/opal.o 00:04:38.934 CXX test/cpp_headers/opal_spec.o 00:04:39.193 CXX test/cpp_headers/pci_ids.o 00:04:39.193 CXX test/cpp_headers/pipe.o 00:04:39.193 CXX test/cpp_headers/queue.o 00:04:39.193 CXX test/cpp_headers/reduce.o 00:04:39.193 CXX test/cpp_headers/rpc.o 00:04:39.193 CXX test/cpp_headers/scheduler.o 00:04:39.193 CXX test/cpp_headers/scsi.o 00:04:39.193 CXX test/cpp_headers/scsi_spec.o 00:04:39.193 CXX test/cpp_headers/sock.o 00:04:39.193 CXX test/cpp_headers/stdinc.o 00:04:39.452 CXX test/cpp_headers/string.o 00:04:39.452 CXX test/cpp_headers/thread.o 00:04:39.452 CXX test/cpp_headers/trace.o 00:04:39.452 LINK cuse 00:04:39.452 CXX test/cpp_headers/trace_parser.o 00:04:39.452 CXX test/cpp_headers/tree.o 00:04:39.452 CXX test/cpp_headers/ublk.o 00:04:39.452 CXX test/cpp_headers/util.o 00:04:39.452 CXX test/cpp_headers/uuid.o 00:04:39.452 CXX test/cpp_headers/version.o 00:04:39.452 CXX test/cpp_headers/vfio_user_pci.o 00:04:39.711 CXX test/cpp_headers/vfio_user_spec.o 00:04:39.711 CXX test/cpp_headers/vhost.o 00:04:39.711 CXX test/cpp_headers/vmd.o 00:04:39.711 CXX test/cpp_headers/xor.o 00:04:39.711 CXX test/cpp_headers/zipf.o 00:04:40.646 LINK esnap 00:04:40.905 00:04:40.905 real 0m48.064s 00:04:40.905 user 4m37.579s 00:04:40.905 sys 1m3.433s 00:04:40.905 14:45:13 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:40.905 ************************************ 00:04:40.905 END TEST make 00:04:40.905 ************************************ 00:04:40.905 14:45:13 -- common/autotest_common.sh@10 -- $ set +x 00:04:41.165 14:45:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.165 14:45:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.165 14:45:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.165 14:45:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.165 14:45:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.165 14:45:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.165 14:45:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.165 14:45:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.165 14:45:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.165 14:45:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.165 14:45:14 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.165 14:45:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.165 14:45:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.165 14:45:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.165 14:45:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.165 14:45:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.165 14:45:14 -- 
scripts/common.sh@344 -- # : 1 00:04:41.165 14:45:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.165 14:45:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.165 14:45:14 -- scripts/common.sh@364 -- # decimal 1 00:04:41.165 14:45:14 -- scripts/common.sh@352 -- # local d=1 00:04:41.165 14:45:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.165 14:45:14 -- scripts/common.sh@354 -- # echo 1 00:04:41.165 14:45:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.165 14:45:14 -- scripts/common.sh@365 -- # decimal 2 00:04:41.165 14:45:14 -- scripts/common.sh@352 -- # local d=2 00:04:41.165 14:45:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.165 14:45:14 -- scripts/common.sh@354 -- # echo 2 00:04:41.165 14:45:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.165 14:45:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.165 14:45:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.165 14:45:14 -- scripts/common.sh@367 -- # return 0 00:04:41.165 14:45:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.165 14:45:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.165 --rc genhtml_branch_coverage=1 00:04:41.165 --rc genhtml_function_coverage=1 00:04:41.165 --rc genhtml_legend=1 00:04:41.165 --rc geninfo_all_blocks=1 00:04:41.165 --rc geninfo_unexecuted_blocks=1 00:04:41.165 00:04:41.165 ' 00:04:41.165 14:45:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.165 --rc genhtml_branch_coverage=1 00:04:41.165 --rc genhtml_function_coverage=1 00:04:41.165 --rc genhtml_legend=1 00:04:41.165 --rc geninfo_all_blocks=1 00:04:41.165 --rc geninfo_unexecuted_blocks=1 00:04:41.165 00:04:41.165 ' 00:04:41.165 14:45:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.165 --rc genhtml_branch_coverage=1 00:04:41.165 --rc genhtml_function_coverage=1 00:04:41.165 --rc genhtml_legend=1 00:04:41.166 --rc geninfo_all_blocks=1 00:04:41.166 --rc geninfo_unexecuted_blocks=1 00:04:41.166 00:04:41.166 ' 00:04:41.166 14:45:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.166 --rc genhtml_branch_coverage=1 00:04:41.166 --rc genhtml_function_coverage=1 00:04:41.166 --rc genhtml_legend=1 00:04:41.166 --rc geninfo_all_blocks=1 00:04:41.166 --rc geninfo_unexecuted_blocks=1 00:04:41.166 00:04:41.166 ' 00:04:41.166 14:45:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.166 14:45:14 -- nvmf/common.sh@7 -- # uname -s 00:04:41.166 14:45:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.166 14:45:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.166 14:45:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.166 14:45:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.166 14:45:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.166 14:45:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.166 14:45:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.166 14:45:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.166 14:45:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.166 14:45:14 -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.166 14:45:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:04:41.166 14:45:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:04:41.166 14:45:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.166 14:45:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.166 14:45:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:41.166 14:45:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.166 14:45:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.166 14:45:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.166 14:45:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.166 14:45:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.166 14:45:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.166 14:45:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.166 14:45:14 -- paths/export.sh@5 -- # export PATH 00:04:41.166 14:45:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.166 14:45:14 -- nvmf/common.sh@46 -- # : 0 00:04:41.166 14:45:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:41.166 14:45:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:41.166 14:45:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:41.166 14:45:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.166 14:45:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.166 14:45:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:41.166 14:45:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:41.166 14:45:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:41.166 14:45:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:41.166 14:45:14 -- spdk/autotest.sh@32 -- # uname -s 00:04:41.166 14:45:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:41.166 14:45:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:41.166 14:45:14 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.166 14:45:14 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:41.166 14:45:14 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.166 14:45:14 -- spdk/autotest.sh@44 -- # modprobe nbd 
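
The cmp_versions trace above (scripts/common.sh) is deciding whether the installed lcov, reported as 1.15, is older than 2 before picking the coverage flags. A minimal standalone sketch of that element-wise comparison, written here as a hypothetical ver_lt helper (the real script uses lt/cmp_versions plus a decimal sanitizer), splitting on the same '.-:' field separators the trace shows:

    ver_lt() {   # returns 0 (true) if $1 is strictly older than $2
        local -a ver1 ver2
        local i n
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # missing fields are treated as 0, so 1.15 compares against 2.0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use legacy lcov_*_coverage flags"

With lcov 1.15 this returns true, which is why the log exports LCOV_OPTS with the old-style --rc lcov_branch_coverage / lcov_function_coverage switches.
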
00:04:41.166 14:45:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:41.166 14:45:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:41.166 14:45:14 -- spdk/autotest.sh@48 -- # udevadm_pid=61837 00:04:41.166 14:45:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:41.166 14:45:14 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.166 14:45:14 -- spdk/autotest.sh@54 -- # echo 61846 00:04:41.166 14:45:14 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.166 14:45:14 -- spdk/autotest.sh@56 -- # echo 61847 00:04:41.166 14:45:14 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.166 14:45:14 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:41.166 14:45:14 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:41.166 14:45:14 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:41.166 14:45:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.166 14:45:14 -- common/autotest_common.sh@10 -- # set +x 00:04:41.166 14:45:14 -- spdk/autotest.sh@70 -- # create_test_list 00:04:41.166 14:45:14 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:41.166 14:45:14 -- common/autotest_common.sh@10 -- # set +x 00:04:41.425 14:45:14 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:41.425 14:45:14 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:41.425 14:45:14 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:41.425 14:45:14 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:41.425 14:45:14 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:41.425 14:45:14 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:41.425 14:45:14 -- common/autotest_common.sh@1450 -- # uname 00:04:41.425 14:45:14 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:41.425 14:45:14 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:41.425 14:45:14 -- common/autotest_common.sh@1470 -- # uname 00:04:41.425 14:45:14 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:41.425 14:45:14 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:41.425 14:45:14 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:41.425 lcov: LCOV version 1.15 00:04:41.426 14:45:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:47.985 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:47.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:47.985 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:47.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 
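
The lcov invocation above is the Baseline capture: an initial (-i) pass that records a zero-hit entry for every instrumented object before any test runs, so files the suite never touches still appear in the final report (the geninfo "no functions found" warnings simply flag .gcno files with no instrumented functions). A sketch of the usual capture-and-merge flow; cov_base.info and the flags match the log, while $SPDK_SRC, cov_test.info and cov_total.info are illustrative names:

    LCOV_FLAGS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    # initial capture: every .gcno gets a zero-execution baseline entry
    lcov $LCOV_FLAGS -q -c --no-external -i -t Baseline -d "$SPDK_SRC" -o cov_base.info
    # ... run the test suites, which write .gcda counters ...
    lcov $LCOV_FLAGS -q -c --no-external -t Tests -d "$SPDK_SRC" -o cov_test.info
    # merge; files present only in the baseline are reported with 0% coverage
    lcov $LCOV_FLAGS -a cov_base.info -a cov_test.info -o cov_total.info
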
00:04:47.985 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:47.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:06.073 14:45:38 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:06.073 14:45:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.073 14:45:38 -- common/autotest_common.sh@10 -- # set +x 00:05:06.073 14:45:38 -- spdk/autotest.sh@89 -- # rm -f 00:05:06.073 14:45:38 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.073 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:06.073 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:06.073 14:45:39 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:06.073 14:45:39 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:06.073 14:45:39 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:06.073 14:45:39 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:06.073 14:45:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:06.073 14:45:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:06.073 14:45:39 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:06.073 14:45:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.073 14:45:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:06.073 14:45:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:06.073 14:45:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:06.073 14:45:39 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:06.073 14:45:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:06.073 14:45:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:06.073 14:45:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:06.073 14:45:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:06.073 14:45:39 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:06.073 14:45:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:06.073 14:45:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:06.073 14:45:39 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:06.073 14:45:39 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:06.073 14:45:39 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:06.073 14:45:39 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:06.073 14:45:39 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:06.073 14:45:39 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:06.073 14:45:39 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:06.073 14:45:39 -- spdk/autotest.sh@108 -- # grep -v p 00:05:06.073 14:45:39 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:06.073 14:45:39 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:06.073 14:45:39 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:06.073 14:45:39 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:06.073 14:45:39 -- scripts/common.sh@389 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:06.073 No valid GPT data, bailing 00:05:06.073 14:45:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:06.073 14:45:39 -- scripts/common.sh@393 -- # pt= 00:05:06.073 14:45:39 -- scripts/common.sh@394 -- # return 1 00:05:06.073 14:45:39 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:06.073 1+0 records in 00:05:06.073 1+0 records out 00:05:06.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456746 s, 230 MB/s 00:05:06.073 14:45:39 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:06.073 14:45:39 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:06.073 14:45:39 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:06.073 14:45:39 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:06.073 14:45:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:06.073 No valid GPT data, bailing 00:05:06.332 14:45:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:06.332 14:45:39 -- scripts/common.sh@393 -- # pt= 00:05:06.332 14:45:39 -- scripts/common.sh@394 -- # return 1 00:05:06.332 14:45:39 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:06.332 1+0 records in 00:05:06.332 1+0 records out 00:05:06.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474495 s, 221 MB/s 00:05:06.332 14:45:39 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:06.332 14:45:39 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:06.332 14:45:39 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:06.332 14:45:39 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:06.332 14:45:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:06.332 No valid GPT data, bailing 00:05:06.332 14:45:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:06.332 14:45:39 -- scripts/common.sh@393 -- # pt= 00:05:06.332 14:45:39 -- scripts/common.sh@394 -- # return 1 00:05:06.332 14:45:39 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:06.332 1+0 records in 00:05:06.332 1+0 records out 00:05:06.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469829 s, 223 MB/s 00:05:06.332 14:45:39 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:06.332 14:45:39 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:06.332 14:45:39 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:06.332 14:45:39 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:06.332 14:45:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:06.332 No valid GPT data, bailing 00:05:06.332 14:45:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:06.332 14:45:39 -- scripts/common.sh@393 -- # pt= 00:05:06.332 14:45:39 -- scripts/common.sh@394 -- # return 1 00:05:06.332 14:45:39 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:06.332 1+0 records in 00:05:06.332 1+0 records out 00:05:06.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394367 s, 266 MB/s 00:05:06.332 14:45:39 -- spdk/autotest.sh@116 -- # sync 00:05:06.895 14:45:39 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:06.895 14:45:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:06.895 14:45:39 -- 
common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:08.795 14:45:41 -- spdk/autotest.sh@122 -- # uname -s 00:05:08.795 14:45:41 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:08.795 14:45:41 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:08.795 14:45:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.795 14:45:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.795 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:08.795 ************************************ 00:05:08.795 START TEST setup.sh 00:05:08.795 ************************************ 00:05:08.795 14:45:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:09.053 * Looking for test storage... 00:05:09.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:09.054 14:45:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:09.054 14:45:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:09.054 14:45:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:09.054 14:45:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:09.054 14:45:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:09.054 14:45:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:09.054 14:45:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:09.054 14:45:42 -- scripts/common.sh@335 -- # IFS=.-: 00:05:09.054 14:45:42 -- scripts/common.sh@335 -- # read -ra ver1 00:05:09.054 14:45:42 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.054 14:45:42 -- scripts/common.sh@336 -- # read -ra ver2 00:05:09.054 14:45:42 -- scripts/common.sh@337 -- # local 'op=<' 00:05:09.054 14:45:42 -- scripts/common.sh@339 -- # ver1_l=2 00:05:09.054 14:45:42 -- scripts/common.sh@340 -- # ver2_l=1 00:05:09.054 14:45:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:09.054 14:45:42 -- scripts/common.sh@343 -- # case "$op" in 00:05:09.054 14:45:42 -- scripts/common.sh@344 -- # : 1 00:05:09.054 14:45:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:09.054 14:45:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.054 14:45:42 -- scripts/common.sh@364 -- # decimal 1 00:05:09.054 14:45:42 -- scripts/common.sh@352 -- # local d=1 00:05:09.054 14:45:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.054 14:45:42 -- scripts/common.sh@354 -- # echo 1 00:05:09.054 14:45:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:09.054 14:45:42 -- scripts/common.sh@365 -- # decimal 2 00:05:09.054 14:45:42 -- scripts/common.sh@352 -- # local d=2 00:05:09.054 14:45:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.054 14:45:42 -- scripts/common.sh@354 -- # echo 2 00:05:09.054 14:45:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:09.054 14:45:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:09.054 14:45:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:09.054 14:45:42 -- scripts/common.sh@367 -- # return 0 00:05:09.054 14:45:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.054 14:45:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:09.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.054 --rc genhtml_branch_coverage=1 00:05:09.054 --rc genhtml_function_coverage=1 00:05:09.054 --rc genhtml_legend=1 00:05:09.054 --rc geninfo_all_blocks=1 00:05:09.054 --rc geninfo_unexecuted_blocks=1 00:05:09.054 00:05:09.054 ' 00:05:09.054 14:45:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:09.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.054 --rc genhtml_branch_coverage=1 00:05:09.054 --rc genhtml_function_coverage=1 00:05:09.054 --rc genhtml_legend=1 00:05:09.054 --rc geninfo_all_blocks=1 00:05:09.054 --rc geninfo_unexecuted_blocks=1 00:05:09.054 00:05:09.054 ' 00:05:09.054 14:45:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:09.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.054 --rc genhtml_branch_coverage=1 00:05:09.054 --rc genhtml_function_coverage=1 00:05:09.054 --rc genhtml_legend=1 00:05:09.054 --rc geninfo_all_blocks=1 00:05:09.054 --rc geninfo_unexecuted_blocks=1 00:05:09.054 00:05:09.054 ' 00:05:09.054 14:45:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:09.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.054 --rc genhtml_branch_coverage=1 00:05:09.054 --rc genhtml_function_coverage=1 00:05:09.054 --rc genhtml_legend=1 00:05:09.054 --rc geninfo_all_blocks=1 00:05:09.054 --rc geninfo_unexecuted_blocks=1 00:05:09.054 00:05:09.054 ' 00:05:09.054 14:45:42 -- setup/test-setup.sh@10 -- # uname -s 00:05:09.054 14:45:42 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:09.054 14:45:42 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:09.054 14:45:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.054 14:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.054 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:05:09.054 ************************************ 00:05:09.054 START TEST acl 00:05:09.054 ************************************ 00:05:09.054 14:45:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:09.054 * Looking for test storage... 
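
The pre_cleanup pass traced earlier (before the setup.sh suite starts) walks every NVMe namespace, skips zoned devices, and zeroes the first 1 MiB of anything that carries no recognizable partition table. A simplified sketch of that loop; the real autotest also consults scripts/spdk-gpt.py before falling back to blkid, which is omitted here:

    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        name=${dev##*/}
        # skip zoned namespaces: they cannot simply be overwritten in place
        zoned=/sys/block/$name/queue/zoned
        if [[ -e $zoned && $(<"$zoned") != none ]]; then
            continue
        fi
        # wipe only devices without a recognizable partition table ("No valid GPT data, bailing")
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync

This is what produces the four "1+0 records in / 1+0 records out" blocks for nvme0n1 and nvme1n1-n3 in the log.
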
00:05:09.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:09.054 14:45:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:09.054 14:45:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:09.054 14:45:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:09.313 14:45:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:09.314 14:45:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:09.314 14:45:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:09.314 14:45:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:09.314 14:45:42 -- scripts/common.sh@335 -- # IFS=.-: 00:05:09.314 14:45:42 -- scripts/common.sh@335 -- # read -ra ver1 00:05:09.314 14:45:42 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.314 14:45:42 -- scripts/common.sh@336 -- # read -ra ver2 00:05:09.314 14:45:42 -- scripts/common.sh@337 -- # local 'op=<' 00:05:09.314 14:45:42 -- scripts/common.sh@339 -- # ver1_l=2 00:05:09.314 14:45:42 -- scripts/common.sh@340 -- # ver2_l=1 00:05:09.314 14:45:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:09.314 14:45:42 -- scripts/common.sh@343 -- # case "$op" in 00:05:09.314 14:45:42 -- scripts/common.sh@344 -- # : 1 00:05:09.314 14:45:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:09.314 14:45:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.314 14:45:42 -- scripts/common.sh@364 -- # decimal 1 00:05:09.314 14:45:42 -- scripts/common.sh@352 -- # local d=1 00:05:09.314 14:45:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.314 14:45:42 -- scripts/common.sh@354 -- # echo 1 00:05:09.314 14:45:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:09.314 14:45:42 -- scripts/common.sh@365 -- # decimal 2 00:05:09.314 14:45:42 -- scripts/common.sh@352 -- # local d=2 00:05:09.314 14:45:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.314 14:45:42 -- scripts/common.sh@354 -- # echo 2 00:05:09.314 14:45:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:09.314 14:45:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:09.314 14:45:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:09.314 14:45:42 -- scripts/common.sh@367 -- # return 0 00:05:09.314 14:45:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.314 14:45:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:09.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.314 --rc genhtml_branch_coverage=1 00:05:09.314 --rc genhtml_function_coverage=1 00:05:09.314 --rc genhtml_legend=1 00:05:09.314 --rc geninfo_all_blocks=1 00:05:09.314 --rc geninfo_unexecuted_blocks=1 00:05:09.314 00:05:09.314 ' 00:05:09.314 14:45:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:09.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.314 --rc genhtml_branch_coverage=1 00:05:09.314 --rc genhtml_function_coverage=1 00:05:09.314 --rc genhtml_legend=1 00:05:09.314 --rc geninfo_all_blocks=1 00:05:09.314 --rc geninfo_unexecuted_blocks=1 00:05:09.314 00:05:09.314 ' 00:05:09.314 14:45:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:09.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.314 --rc genhtml_branch_coverage=1 00:05:09.314 --rc genhtml_function_coverage=1 00:05:09.314 --rc genhtml_legend=1 00:05:09.314 --rc geninfo_all_blocks=1 00:05:09.314 --rc geninfo_unexecuted_blocks=1 00:05:09.314 00:05:09.314 ' 00:05:09.314 14:45:42 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:09.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.314 --rc genhtml_branch_coverage=1 00:05:09.314 --rc genhtml_function_coverage=1 00:05:09.314 --rc genhtml_legend=1 00:05:09.314 --rc geninfo_all_blocks=1 00:05:09.314 --rc geninfo_unexecuted_blocks=1 00:05:09.314 00:05:09.314 ' 00:05:09.314 14:45:42 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:09.314 14:45:42 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:09.314 14:45:42 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:09.314 14:45:42 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:09.314 14:45:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:09.314 14:45:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:09.314 14:45:42 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:09.314 14:45:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.314 14:45:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:09.314 14:45:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:09.314 14:45:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:09.314 14:45:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:09.314 14:45:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:09.314 14:45:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:09.314 14:45:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:09.314 14:45:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:09.314 14:45:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:09.314 14:45:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:09.314 14:45:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:09.314 14:45:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:09.314 14:45:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:09.314 14:45:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:09.314 14:45:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:09.314 14:45:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:09.314 14:45:42 -- setup/acl.sh@12 -- # devs=() 00:05:09.314 14:45:42 -- setup/acl.sh@12 -- # declare -a devs 00:05:09.314 14:45:42 -- setup/acl.sh@13 -- # drivers=() 00:05:09.314 14:45:42 -- setup/acl.sh@13 -- # declare -A drivers 00:05:09.314 14:45:42 -- setup/acl.sh@51 -- # setup reset 00:05:09.314 14:45:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.314 14:45:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.251 14:45:43 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:10.251 14:45:43 -- setup/acl.sh@16 -- # local dev driver 00:05:10.251 14:45:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.251 14:45:43 -- setup/acl.sh@15 -- # setup output status 00:05:10.251 14:45:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.251 14:45:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:10.251 Hugepages 00:05:10.251 node hugesize free / total 00:05:10.251 14:45:43 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:10.251 14:45:43 -- setup/acl.sh@19 -- # continue 00:05:10.251 14:45:43 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:10.251 00:05:10.251 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:10.251 14:45:43 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:10.251 14:45:43 -- setup/acl.sh@19 -- # continue 00:05:10.251 14:45:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.251 14:45:43 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:10.251 14:45:43 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:10.251 14:45:43 -- setup/acl.sh@20 -- # continue 00:05:10.251 14:45:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.510 14:45:43 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:10.510 14:45:43 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:10.510 14:45:43 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:10.510 14:45:43 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:10.510 14:45:43 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:10.510 14:45:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.510 14:45:43 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:10.510 14:45:43 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:10.510 14:45:43 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:10.510 14:45:43 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:10.510 14:45:43 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:10.510 14:45:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.510 14:45:43 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:10.510 14:45:43 -- setup/acl.sh@54 -- # run_test denied denied 00:05:10.510 14:45:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.510 14:45:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.510 14:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:10.510 ************************************ 00:05:10.510 START TEST denied 00:05:10.510 ************************************ 00:05:10.510 14:45:43 -- common/autotest_common.sh@1114 -- # denied 00:05:10.510 14:45:43 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:10.510 14:45:43 -- setup/acl.sh@38 -- # setup output config 00:05:10.510 14:45:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.510 14:45:43 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:10.510 14:45:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:11.447 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:11.447 14:45:44 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:11.447 14:45:44 -- setup/acl.sh@28 -- # local dev driver 00:05:11.447 14:45:44 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:11.447 14:45:44 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:11.447 14:45:44 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:11.447 14:45:44 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:11.447 14:45:44 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:11.447 14:45:44 -- setup/acl.sh@41 -- # setup reset 00:05:11.447 14:45:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.447 14:45:44 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.014 00:05:12.014 real 0m1.599s 00:05:12.014 user 0m0.618s 00:05:12.014 sys 0m0.937s 00:05:12.014 14:45:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.014 ************************************ 00:05:12.014 END TEST denied 00:05:12.014 ************************************ 00:05:12.014 14:45:45 -- 
common/autotest_common.sh@10 -- # set +x 00:05:12.014 14:45:45 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:12.014 14:45:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.014 14:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.014 14:45:45 -- common/autotest_common.sh@10 -- # set +x 00:05:12.273 ************************************ 00:05:12.273 START TEST allowed 00:05:12.273 ************************************ 00:05:12.273 14:45:45 -- common/autotest_common.sh@1114 -- # allowed 00:05:12.273 14:45:45 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:12.273 14:45:45 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:12.273 14:45:45 -- setup/acl.sh@45 -- # setup output config 00:05:12.273 14:45:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.273 14:45:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.208 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.208 14:45:45 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:13.208 14:45:45 -- setup/acl.sh@28 -- # local dev driver 00:05:13.208 14:45:45 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:13.208 14:45:45 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:13.208 14:45:45 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:13.208 14:45:45 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:13.208 14:45:45 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:13.208 14:45:45 -- setup/acl.sh@48 -- # setup reset 00:05:13.208 14:45:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.208 14:45:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.775 00:05:13.775 real 0m1.648s 00:05:13.775 user 0m0.740s 00:05:13.775 sys 0m0.905s 00:05:13.775 14:45:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.775 14:45:46 -- common/autotest_common.sh@10 -- # set +x 00:05:13.775 ************************************ 00:05:13.775 END TEST allowed 00:05:13.775 ************************************ 00:05:13.775 00:05:13.775 real 0m4.748s 00:05:13.775 user 0m2.079s 00:05:13.775 sys 0m2.649s 00:05:13.775 14:45:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.775 14:45:46 -- common/autotest_common.sh@10 -- # set +x 00:05:13.775 ************************************ 00:05:13.775 END TEST acl 00:05:13.775 ************************************ 00:05:13.775 14:45:46 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:13.775 14:45:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.775 14:45:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.775 14:45:46 -- common/autotest_common.sh@10 -- # set +x 00:05:13.775 ************************************ 00:05:13.775 START TEST hugepages 00:05:13.775 ************************************ 00:05:13.775 14:45:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:14.035 * Looking for test storage... 
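
The acl denied/allowed tests above drive setup.sh with PCI_BLOCKED / PCI_ALLOWED and then confirm the outcome by resolving each controller's driver symlink in sysfs, exactly as the readlink lines in the trace show. A minimal sketch of that verification step; verify_driver is a hypothetical name for what the script's verify() does:

    verify_driver() {   # usage: verify_driver 0000:00:06.0 nvme
        local bdf=$1 expected=$2 driver
        [[ -e /sys/bus/pci/devices/$bdf ]] || return 1
        driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
        # compare only the driver name at the end of the resolved path
        [[ ${driver##*/} == "$expected" ]]
    }
    # e.g. after 'PCI_ALLOWED=0000:00:06.0 setup.sh config' the controller should sit on uio_pci_generic
    verify_driver 0000:00:06.0 uio_pci_generic && echo "bound as expected"
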
00:05:14.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.035 14:45:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.035 14:45:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.035 14:45:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.035 14:45:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.035 14:45:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.035 14:45:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.035 14:45:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.035 14:45:47 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.035 14:45:47 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.035 14:45:47 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.035 14:45:47 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.035 14:45:47 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.035 14:45:47 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.035 14:45:47 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.035 14:45:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.035 14:45:47 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.035 14:45:47 -- scripts/common.sh@344 -- # : 1 00:05:14.035 14:45:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.035 14:45:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.035 14:45:47 -- scripts/common.sh@364 -- # decimal 1 00:05:14.035 14:45:47 -- scripts/common.sh@352 -- # local d=1 00:05:14.035 14:45:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.035 14:45:47 -- scripts/common.sh@354 -- # echo 1 00:05:14.035 14:45:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.035 14:45:47 -- scripts/common.sh@365 -- # decimal 2 00:05:14.035 14:45:47 -- scripts/common.sh@352 -- # local d=2 00:05:14.035 14:45:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.035 14:45:47 -- scripts/common.sh@354 -- # echo 2 00:05:14.036 14:45:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.036 14:45:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.036 14:45:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.036 14:45:47 -- scripts/common.sh@367 -- # return 0 00:05:14.036 14:45:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.036 14:45:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.036 --rc genhtml_branch_coverage=1 00:05:14.036 --rc genhtml_function_coverage=1 00:05:14.036 --rc genhtml_legend=1 00:05:14.036 --rc geninfo_all_blocks=1 00:05:14.036 --rc geninfo_unexecuted_blocks=1 00:05:14.036 00:05:14.036 ' 00:05:14.036 14:45:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.036 --rc genhtml_branch_coverage=1 00:05:14.036 --rc genhtml_function_coverage=1 00:05:14.036 --rc genhtml_legend=1 00:05:14.036 --rc geninfo_all_blocks=1 00:05:14.036 --rc geninfo_unexecuted_blocks=1 00:05:14.036 00:05:14.036 ' 00:05:14.036 14:45:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.036 --rc genhtml_branch_coverage=1 00:05:14.036 --rc genhtml_function_coverage=1 00:05:14.036 --rc genhtml_legend=1 00:05:14.036 --rc geninfo_all_blocks=1 00:05:14.036 --rc geninfo_unexecuted_blocks=1 00:05:14.036 00:05:14.036 ' 00:05:14.036 14:45:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.036 --rc genhtml_branch_coverage=1 00:05:14.036 --rc genhtml_function_coverage=1 00:05:14.036 --rc genhtml_legend=1 00:05:14.036 --rc geninfo_all_blocks=1 00:05:14.036 --rc geninfo_unexecuted_blocks=1 00:05:14.036 00:05:14.036 ' 00:05:14.036 14:45:47 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:14.036 14:45:47 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:14.036 14:45:47 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:14.036 14:45:47 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:14.036 14:45:47 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:14.036 14:45:47 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:14.036 14:45:47 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:14.036 14:45:47 -- setup/common.sh@18 -- # local node= 00:05:14.036 14:45:47 -- setup/common.sh@19 -- # local var val 00:05:14.036 14:45:47 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.036 14:45:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.036 14:45:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.036 14:45:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.036 14:45:47 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.036 14:45:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 4423560 kB' 'MemAvailable: 7353284 kB' 'Buffers: 2684 kB' 'Cached: 3130204 kB' 'SwapCached: 0 kB' 'Active: 496240 kB' 'Inactive: 2753224 kB' 'Active(anon): 127088 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118256 kB' 'Mapped: 50888 kB' 'Shmem: 10512 kB' 'KReclaimable: 88592 kB' 'Slab: 191676 kB' 'SReclaimable: 88592 kB' 'SUnreclaim: 103084 kB' 'KernelStack: 6800 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 320988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- 
setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.036 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.036 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # continue 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.037 14:45:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.037 14:45:47 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.037 14:45:47 -- setup/common.sh@33 -- # echo 2048 00:05:14.037 14:45:47 -- setup/common.sh@33 -- # return 0 00:05:14.037 14:45:47 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:14.037 14:45:47 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:14.037 14:45:47 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:14.037 14:45:47 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:14.037 14:45:47 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:14.037 14:45:47 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:14.037 14:45:47 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:14.037 14:45:47 -- setup/hugepages.sh@207 -- # get_nodes 00:05:14.037 14:45:47 -- setup/hugepages.sh@27 -- # local node 00:05:14.037 14:45:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.037 14:45:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:14.037 14:45:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.037 14:45:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.037 14:45:47 -- setup/hugepages.sh@208 -- # clear_hp 00:05:14.037 14:45:47 -- setup/hugepages.sh@37 -- # local node hp 00:05:14.037 14:45:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:14.037 14:45:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.037 14:45:47 -- setup/hugepages.sh@41 -- # echo 0 00:05:14.037 14:45:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.037 14:45:47 -- setup/hugepages.sh@41 -- # echo 0 00:05:14.037 14:45:47 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:14.037 14:45:47 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:14.037 14:45:47 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:14.037 14:45:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.037 14:45:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.037 14:45:47 -- common/autotest_common.sh@10 -- # set +x 00:05:14.296 ************************************ 00:05:14.296 START TEST default_setup 00:05:14.296 ************************************ 00:05:14.296 14:45:47 -- common/autotest_common.sh@1114 -- # default_setup 00:05:14.296 14:45:47 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:14.296 14:45:47 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:14.296 14:45:47 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:14.296 14:45:47 -- setup/hugepages.sh@51 -- # shift 00:05:14.296 14:45:47 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:14.296 14:45:47 -- setup/hugepages.sh@52 -- # local node_ids 00:05:14.296 14:45:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:14.296 14:45:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:14.296 14:45:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:14.296 14:45:47 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:14.296 14:45:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.296 14:45:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:14.296 14:45:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:14.296 14:45:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.296 14:45:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.296 14:45:47 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:14.296 14:45:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:14.296 14:45:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:14.296 14:45:47 -- setup/hugepages.sh@73 -- # return 0 00:05:14.296 14:45:47 -- setup/hugepages.sh@137 -- # setup output 00:05:14.296 14:45:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.296 14:45:47 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.864 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.127 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.127 14:45:48 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:15.127 14:45:48 -- setup/hugepages.sh@89 -- # local node 00:05:15.127 14:45:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.127 14:45:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.127 14:45:48 -- setup/hugepages.sh@92 -- # local surp 00:05:15.127 14:45:48 -- setup/hugepages.sh@93 -- # local resv 00:05:15.127 14:45:48 -- setup/hugepages.sh@94 -- # local anon 00:05:15.127 14:45:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.127 14:45:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.127 14:45:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.127 14:45:48 -- setup/common.sh@18 -- # local node= 00:05:15.127 14:45:48 -- setup/common.sh@19 -- # local var val 00:05:15.127 14:45:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.127 14:45:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.127 14:45:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.127 14:45:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.127 14:45:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.127 14:45:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6516056 kB' 'MemAvailable: 9445636 kB' 'Buffers: 2684 kB' 'Cached: 3130192 kB' 'SwapCached: 0 kB' 'Active: 497780 kB' 'Inactive: 2753224 kB' 'Active(anon): 128628 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119728 kB' 'Mapped: 50884 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191560 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103260 kB' 'KernelStack: 6720 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.127 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.127 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- 
setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.128 14:45:48 -- setup/common.sh@33 -- # echo 0 00:05:15.128 14:45:48 -- setup/common.sh@33 -- # return 0 00:05:15.128 14:45:48 -- setup/hugepages.sh@97 -- # anon=0 00:05:15.128 14:45:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.128 14:45:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.128 14:45:48 -- setup/common.sh@18 -- # local node= 00:05:15.128 14:45:48 -- setup/common.sh@19 -- # local var val 00:05:15.128 14:45:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.128 14:45:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.128 14:45:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.128 14:45:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.128 14:45:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.128 14:45:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6515808 kB' 'MemAvailable: 9445388 kB' 'Buffers: 2684 kB' 'Cached: 3130192 kB' 'SwapCached: 0 kB' 'Active: 497544 kB' 'Inactive: 2753224 kB' 'Active(anon): 128392 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119464 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191552 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103252 kB' 'KernelStack: 6736 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 
00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.128 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.128 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- 
setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 
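The setup/common.sh@31-33 entries repeated above amount to one keyed lookup over /proc/meminfo: each line is split on IFS=': ', every key that is not the one requested is skipped with continue, and the value is echoed once the requested key matches (echo 2048 for Hugepagesize, echo 0 for AnonHugePages, and so on). A minimal stand-alone sketch of that loop, assuming a helper name get_meminfo_sketch that is illustrative and not part of the SPDK scripts:

    # Illustrative only; mirrors the traced IFS=': ' / read / continue loop.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated "continue" entries above
            echo "$val"                        # matched key: print its value column
            return 0
        done < /proc/meminfo
        return 1
    }
    # get_meminfo_sketch Hugepagesize  -> 2048, matching the "echo 2048" seen earlier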
00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.129 14:45:48 -- setup/common.sh@33 -- # echo 0 00:05:15.129 14:45:48 -- setup/common.sh@33 -- # return 0 00:05:15.129 14:45:48 -- setup/hugepages.sh@99 -- # surp=0 00:05:15.129 14:45:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.129 14:45:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.129 14:45:48 -- setup/common.sh@18 -- # local node= 00:05:15.129 14:45:48 -- setup/common.sh@19 -- # local var val 00:05:15.129 14:45:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.129 14:45:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.129 14:45:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.129 14:45:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.129 14:45:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.129 14:45:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.129 
14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.129 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.129 14:45:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6515052 kB' 'MemAvailable: 9444632 kB' 'Buffers: 2684 kB' 'Cached: 3130192 kB' 'SwapCached: 0 kB' 'Active: 497568 kB' 'Inactive: 2753224 kB' 'Active(anon): 128416 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119524 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191552 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103252 kB' 'KernelStack: 6752 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.129 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 
14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.130 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.130 14:45:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.131 
14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.131 14:45:48 -- setup/common.sh@33 -- # echo 0 00:05:15.131 14:45:48 -- setup/common.sh@33 -- # return 0 00:05:15.131 14:45:48 -- setup/hugepages.sh@100 -- # resv=0 00:05:15.131 nr_hugepages=1024 00:05:15.131 14:45:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:15.131 resv_hugepages=0 00:05:15.131 14:45:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.131 surplus_hugepages=0 00:05:15.131 14:45:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.131 anon_hugepages=0 00:05:15.131 14:45:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.131 14:45:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.131 14:45:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:15.131 14:45:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.131 14:45:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.131 14:45:48 -- setup/common.sh@18 -- # local node= 00:05:15.131 14:45:48 -- setup/common.sh@19 -- # local var val 00:05:15.131 14:45:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.131 14:45:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.131 14:45:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.131 14:45:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.131 14:45:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.131 14:45:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6515052 kB' 'MemAvailable: 9444632 kB' 'Buffers: 2684 kB' 'Cached: 3130192 kB' 'SwapCached: 0 kB' 'Active: 497540 kB' 'Inactive: 2753224 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191552 kB' 
'SReclaimable: 88300 kB' 'SUnreclaim: 103252 kB' 'KernelStack: 6752 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 
14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.131 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.131 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- 
setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- 
setup/common.sh@32 -- # continue 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.132 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.132 14:45:48 -- setup/common.sh@33 -- # echo 1024 00:05:15.132 14:45:48 -- setup/common.sh@33 -- # return 0 00:05:15.132 14:45:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.132 14:45:48 -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.132 14:45:48 -- setup/hugepages.sh@27 -- # local node 00:05:15.132 14:45:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.132 14:45:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:15.132 14:45:48 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.132 14:45:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.132 14:45:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.132 14:45:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.132 14:45:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.132 14:45:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.132 14:45:48 -- setup/common.sh@18 -- # local node=0 00:05:15.132 14:45:48 -- setup/common.sh@19 -- # local var val 00:05:15.132 14:45:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.132 14:45:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.132 14:45:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.132 14:45:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.132 14:45:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.132 14:45:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.132 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6515052 kB' 'MemUsed: 5724056 kB' 'SwapCached: 0 kB' 'Active: 497552 kB' 'Inactive: 2753224 kB' 'Active(anon): 128400 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 3132876 kB' 'Mapped: 50764 kB' 'AnonPages: 119512 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88300 kB' 'Slab: 191552 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 
00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.133 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.133 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.133 14:45:48 -- setup/common.sh@33 -- # echo 0 00:05:15.133 14:45:48 -- setup/common.sh@33 -- # return 0 00:05:15.133 14:45:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.133 14:45:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.133 14:45:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.133 14:45:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.133 node0=1024 expecting 1024 00:05:15.133 14:45:48 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:15.133 14:45:48 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:15.133 00:05:15.133 real 0m1.048s 00:05:15.133 user 0m0.490s 00:05:15.133 sys 0m0.494s 00:05:15.133 14:45:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.133 14:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.133 ************************************ 00:05:15.134 END TEST default_setup 00:05:15.134 ************************************ 00:05:15.393 14:45:48 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:15.393 14:45:48 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.393 14:45:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.393 14:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.393 ************************************ 00:05:15.393 START TEST per_node_1G_alloc 00:05:15.393 ************************************ 00:05:15.393 14:45:48 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:15.393 14:45:48 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:15.393 14:45:48 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:15.393 14:45:48 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:15.393 14:45:48 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:15.393 14:45:48 -- setup/hugepages.sh@51 -- # shift 00:05:15.393 14:45:48 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:15.393 14:45:48 -- setup/hugepages.sh@52 -- # local node_ids 00:05:15.393 14:45:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:15.393 14:45:48 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:15.393 14:45:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:15.393 14:45:48 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:15.393 14:45:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:15.393 14:45:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:15.393 14:45:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:15.393 14:45:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:15.393 14:45:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:15.393 14:45:48 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:15.393 14:45:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:15.393 14:45:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:15.393 14:45:48 -- setup/hugepages.sh@73 -- # return 0 00:05:15.393 14:45:48 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:15.393 14:45:48 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:15.393 14:45:48 -- setup/hugepages.sh@146 -- # setup output 00:05:15.393 14:45:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.393 14:45:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.654 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.654 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.654 14:45:48 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:15.654 14:45:48 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:15.654 14:45:48 -- setup/hugepages.sh@89 -- # local node 00:05:15.654 14:45:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.654 14:45:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.654 14:45:48 -- setup/hugepages.sh@92 -- # local surp 00:05:15.654 14:45:48 -- setup/hugepages.sh@93 -- # local resv 00:05:15.654 14:45:48 -- setup/hugepages.sh@94 -- # local anon 00:05:15.654 14:45:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.654 14:45:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.654 14:45:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.654 14:45:48 -- setup/common.sh@18 -- # local node= 00:05:15.654 14:45:48 -- setup/common.sh@19 -- # local var val 00:05:15.654 14:45:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.654 14:45:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.654 14:45:48 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.654 14:45:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.654 14:45:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.654 14:45:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7559320 kB' 'MemAvailable: 10488916 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 498108 kB' 'Inactive: 2753240 kB' 'Active(anon): 128956 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 50852 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191580 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103280 kB' 'KernelStack: 6776 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 
-- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 
14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.654 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.654 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.655 14:45:48 -- setup/common.sh@33 -- # echo 0 00:05:15.655 14:45:48 -- setup/common.sh@33 -- # return 0 00:05:15.655 14:45:48 -- setup/hugepages.sh@97 -- # anon=0 00:05:15.655 14:45:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.655 14:45:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.655 14:45:48 -- setup/common.sh@18 -- # local node= 00:05:15.655 14:45:48 -- setup/common.sh@19 -- # local var val 00:05:15.655 14:45:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.655 14:45:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.655 14:45:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.655 14:45:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.655 14:45:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.655 14:45:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7559320 kB' 'MemAvailable: 10488916 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497940 kB' 'Inactive: 2753240 kB' 
'Active(anon): 128788 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119892 kB' 'Mapped: 50852 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191580 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103280 kB' 'KernelStack: 6744 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.655 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.655 14:45:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # 
continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.656 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.656 14:45:48 -- setup/common.sh@33 -- # echo 0 00:05:15.656 14:45:48 -- setup/common.sh@33 -- # return 0 00:05:15.656 14:45:48 -- setup/hugepages.sh@99 -- # surp=0 00:05:15.656 14:45:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.656 14:45:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.656 14:45:48 -- setup/common.sh@18 -- # local node= 00:05:15.656 14:45:48 -- setup/common.sh@19 -- # local var val 00:05:15.656 14:45:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.656 14:45:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.656 14:45:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.656 14:45:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.656 14:45:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.656 14:45:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.656 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7559320 kB' 'MemAvailable: 10488916 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497708 kB' 'Inactive: 2753240 kB' 'Active(anon): 128556 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119648 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191584 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103284 kB' 'KernelStack: 6768 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.918 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.918 14:45:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.919 14:45:48 -- setup/common.sh@33 -- # echo 0 00:05:15.919 14:45:48 -- setup/common.sh@33 -- # return 0 00:05:15.919 14:45:48 -- setup/hugepages.sh@100 -- # resv=0 00:05:15.919 nr_hugepages=512 00:05:15.919 14:45:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:15.919 resv_hugepages=0 00:05:15.919 14:45:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.919 surplus_hugepages=0 00:05:15.919 14:45:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.919 anon_hugepages=0 00:05:15.919 14:45:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.919 14:45:48 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:15.919 14:45:48 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:15.919 14:45:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.919 14:45:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.919 14:45:48 -- setup/common.sh@18 -- # local node= 00:05:15.919 14:45:48 -- setup/common.sh@19 -- # local var val 00:05:15.919 14:45:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.919 14:45:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.919 14:45:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.919 14:45:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.919 14:45:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.919 14:45:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7559320 kB' 'MemAvailable: 10488916 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497660 kB' 'Inactive: 2753240 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119648 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191584 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103284 kB' 'KernelStack: 6768 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 
14:45:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.919 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.919 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 
14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.920 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.920 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.920 14:45:48 -- setup/common.sh@33 -- # echo 512 00:05:15.920 14:45:48 -- setup/common.sh@33 -- # return 0 00:05:15.920 14:45:48 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:15.920 14:45:48 -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.920 14:45:48 -- setup/hugepages.sh@27 -- # local node 00:05:15.920 14:45:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.920 14:45:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:15.920 14:45:48 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.920 14:45:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.920 14:45:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.920 14:45:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.921 14:45:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.921 14:45:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.921 14:45:48 -- setup/common.sh@18 -- # local node=0 00:05:15.921 14:45:48 -- setup/common.sh@19 -- # local 
var val 00:05:15.921 14:45:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:15.921 14:45:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.921 14:45:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.921 14:45:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.921 14:45:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.921 14:45:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7559708 kB' 'MemUsed: 4679400 kB' 'SwapCached: 0 kB' 'Active: 497668 kB' 'Inactive: 2753240 kB' 'Active(anon): 128516 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 3132880 kB' 'Mapped: 50764 kB' 'AnonPages: 119652 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88300 kB' 'Slab: 191580 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- 
setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.921 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.921 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.922 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.922 14:45:48 -- setup/common.sh@32 -- # continue 00:05:15.922 14:45:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.922 14:45:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.922 14:45:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.922 14:45:48 -- setup/common.sh@33 -- # echo 0 00:05:15.922 14:45:48 -- setup/common.sh@33 -- # return 0 00:05:15.922 14:45:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.922 14:45:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.922 14:45:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.922 14:45:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.922 node0=512 expecting 512 00:05:15.922 14:45:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:15.922 14:45:48 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:15.922 00:05:15.922 real 0m0.609s 00:05:15.922 user 0m0.307s 00:05:15.922 sys 0m0.339s 00:05:15.922 14:45:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.922 14:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.922 ************************************ 00:05:15.922 END TEST per_node_1G_alloc 00:05:15.922 ************************************ 00:05:15.922 14:45:48 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:15.922 14:45:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.922 14:45:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.922 14:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.922 ************************************ 00:05:15.922 START TEST even_2G_alloc 00:05:15.922 ************************************ 00:05:15.922 14:45:48 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:15.922 14:45:48 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:15.922 14:45:48 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:15.922 14:45:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:15.922 14:45:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:15.922 14:45:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:15.922 14:45:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:15.922 14:45:48 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:15.922 14:45:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:15.922 14:45:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:15.922 14:45:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:15.922 14:45:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:15.922 14:45:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:15.922 14:45:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:15.922 14:45:48 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:15.922 14:45:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:15.922 14:45:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:15.922 14:45:48 -- setup/hugepages.sh@83 -- # : 0 00:05:15.922 14:45:48 -- setup/hugepages.sh@84 -- # : 0 00:05:15.922 14:45:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:15.922 14:45:48 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:15.922 14:45:48 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:15.922 14:45:48 -- setup/hugepages.sh@153 -- # setup output 00:05:15.922 14:45:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.922 14:45:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.455 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.455 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.455 14:45:49 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:16.455 14:45:49 -- setup/hugepages.sh@89 -- # local node 00:05:16.455 14:45:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.456 14:45:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.456 14:45:49 -- setup/hugepages.sh@92 -- # local surp 00:05:16.456 14:45:49 -- setup/hugepages.sh@93 -- # local resv 00:05:16.456 14:45:49 -- setup/hugepages.sh@94 -- # local anon 00:05:16.456 14:45:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.456 14:45:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.456 14:45:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.456 14:45:49 -- setup/common.sh@18 -- # local node= 00:05:16.456 14:45:49 -- setup/common.sh@19 -- # local var val 00:05:16.456 14:45:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.456 14:45:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.456 14:45:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.456 14:45:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.456 14:45:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.456 14:45:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6511652 kB' 'MemAvailable: 9441248 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497984 kB' 'Inactive: 2753240 kB' 'Active(anon): 128832 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 50876 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191576 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103276 kB' 'KernelStack: 6792 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 
14:45:49 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.456 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.456 14:45:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.456 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # 
continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.457 14:45:49 -- setup/common.sh@33 -- # echo 0 00:05:16.457 14:45:49 -- setup/common.sh@33 -- # return 0 00:05:16.457 14:45:49 -- setup/hugepages.sh@97 -- # anon=0 00:05:16.457 14:45:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.457 14:45:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.457 14:45:49 -- setup/common.sh@18 -- # local node= 00:05:16.457 14:45:49 -- setup/common.sh@19 -- # local var val 00:05:16.457 14:45:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.457 14:45:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.457 14:45:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.457 14:45:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.457 14:45:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.457 14:45:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6512172 kB' 'MemAvailable: 9441768 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497408 kB' 'Inactive: 2753240 kB' 'Active(anon): 128256 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119404 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191588 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103288 kB' 'KernelStack: 6768 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 
00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.457 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.457 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # 
continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.458 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.458 14:45:49 -- setup/common.sh@33 -- # echo 0 00:05:16.458 14:45:49 -- setup/common.sh@33 -- # return 0 00:05:16.458 14:45:49 -- setup/hugepages.sh@99 -- # surp=0 00:05:16.458 14:45:49 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.458 14:45:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.458 14:45:49 -- setup/common.sh@18 -- # local node= 00:05:16.458 14:45:49 -- setup/common.sh@19 -- # local var val 00:05:16.458 14:45:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.458 14:45:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.458 14:45:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.458 14:45:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.458 14:45:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.458 14:45:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.458 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6512288 kB' 'MemAvailable: 9441884 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497568 kB' 'Inactive: 2753240 kB' 'Active(anon): 128416 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119564 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191584 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103284 kB' 'KernelStack: 6736 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 
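Each non-matching key in this loop appears in the trace as a continue plus a backslash-escaped target such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d: under set -x, bash re-quotes the right-hand side of the [[ == ]] comparison so it cannot be re-read as a glob, which is what produces the escaped form in the log. A tiny runnable illustration of that tracing behaviour is below; nothing in it comes from the project scripts.

#!/usr/bin/env bash
# Illustration only: where the backslash-escaped comparison strings in the
# log come from. Variable names are ours, not setup/common.sh's.
get=HugePages_Rsvd
set -x
for var in MemTotal MemFree HugePages_Rsvd; do
    # xtrace prints the quoted right-hand side with its characters escaped,
    # e.g. [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    [[ $var == "$get" ]] && echo "matched $var"
done
set +x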
00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- 
setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.459 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.459 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 
00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.460 14:45:49 -- setup/common.sh@33 -- # echo 0 00:05:16.460 14:45:49 -- setup/common.sh@33 -- # return 0 00:05:16.460 14:45:49 -- setup/hugepages.sh@100 -- # resv=0 00:05:16.460 nr_hugepages=1024 00:05:16.460 14:45:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:16.460 resv_hugepages=0 00:05:16.460 14:45:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.460 surplus_hugepages=0 00:05:16.460 14:45:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.460 anon_hugepages=0 00:05:16.460 14:45:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.460 14:45:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.460 14:45:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:16.460 14:45:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.460 14:45:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.460 14:45:49 -- setup/common.sh@18 -- # local node= 00:05:16.460 14:45:49 -- setup/common.sh@19 -- # local var val 00:05:16.460 14:45:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.460 14:45:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.460 14:45:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.460 14:45:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.460 14:45:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.460 14:45:49 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6512288 kB' 'MemAvailable: 9441884 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497400 kB' 'Inactive: 2753240 kB' 'Active(anon): 128248 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119392 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191580 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103280 kB' 'KernelStack: 6768 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.460 
14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.460 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.460 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 
00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.461 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.461 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 
00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.462 14:45:49 -- setup/common.sh@33 -- # echo 1024 00:05:16.462 14:45:49 -- setup/common.sh@33 -- # return 0 00:05:16.462 14:45:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.462 14:45:49 -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.462 14:45:49 -- setup/hugepages.sh@27 -- # local node 00:05:16.462 14:45:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.462 14:45:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:16.462 14:45:49 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:16.462 14:45:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.462 14:45:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.462 14:45:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.462 14:45:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.462 14:45:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.462 14:45:49 -- setup/common.sh@18 -- # local node=0 00:05:16.462 14:45:49 -- setup/common.sh@19 -- # local var val 00:05:16.462 14:45:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.462 14:45:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.462 14:45:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.462 14:45:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.462 14:45:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.462 14:45:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6512288 kB' 'MemUsed: 5726820 kB' 'SwapCached: 0 kB' 'Active: 497400 kB' 'Inactive: 2753240 kB' 'Active(anon): 128248 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 3132880 kB' 'Mapped: 50764 kB' 'AnonPages: 119392 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88300 kB' 'Slab: 191580 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:16.462 14:45:49 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.462 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.462 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 
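Just before this point the trace asserted the hugepage accounting at hugepages.sh@107/@110, (( 1024 == nr_hugepages + surp + resv )), then get_nodes globbed /sys/devices/system/node/node+([0-9]) and began re-reading HugePages_Surp from node0's own meminfo file. A hedged sketch of both steps is below; the function and array names are illustrative, not the project's, and only the assertion and the glob mirror what the log shows.

#!/usr/bin/env bash
# Hedged sketch of the accounting check and per-node walk traced around here.
shopt -s extglob nullglob

check_hugepages_accounting() {
    local total=$1 requested=$2 surp=$3 resv=$4
    # In the run above: 1024 == 1024 + 0 + 0
    (( total == requested + surp + resv )) || {
        echo "hugepage accounting mismatch: $total != $requested + $surp + $resv" >&2
        return 1
    }
}
check_hugepages_accounting 1024 1024 0 0

# Enumerate NUMA nodes the same way the trace does and read each node's
# HugePages_Total straight from its per-node meminfo file.
declare -A node_hugepages
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    node_hugepages[$node]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
done
echo "per-node HugePages_Total: ${node_hugepages[*]:-none}"   # 1024 on node0 here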
00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- 
setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # continue 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.463 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.463 14:45:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.463 14:45:49 -- setup/common.sh@33 -- # echo 0 00:05:16.463 14:45:49 -- setup/common.sh@33 -- # return 0 00:05:16.463 14:45:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.463 14:45:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.463 14:45:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.463 14:45:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.463 
node0=1024 expecting 1024 00:05:16.463 14:45:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:16.463 14:45:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:16.463 00:05:16.463 real 0m0.575s 00:05:16.463 user 0m0.264s 00:05:16.463 sys 0m0.346s 00:05:16.463 14:45:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.463 14:45:49 -- common/autotest_common.sh@10 -- # set +x 00:05:16.463 ************************************ 00:05:16.463 END TEST even_2G_alloc 00:05:16.463 ************************************ 00:05:16.463 14:45:49 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:16.463 14:45:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.463 14:45:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.463 14:45:49 -- common/autotest_common.sh@10 -- # set +x 00:05:16.463 ************************************ 00:05:16.463 START TEST odd_alloc 00:05:16.463 ************************************ 00:05:16.463 14:45:49 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:16.463 14:45:49 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:16.463 14:45:49 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:16.463 14:45:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:16.463 14:45:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.463 14:45:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:16.463 14:45:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:16.464 14:45:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.464 14:45:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.464 14:45:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:16.464 14:45:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:16.464 14:45:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.464 14:45:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.464 14:45:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:16.464 14:45:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:16.464 14:45:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.464 14:45:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:16.464 14:45:49 -- setup/hugepages.sh@83 -- # : 0 00:05:16.464 14:45:49 -- setup/hugepages.sh@84 -- # : 0 00:05:16.464 14:45:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.464 14:45:49 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:16.464 14:45:49 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:16.464 14:45:49 -- setup/hugepages.sh@160 -- # setup output 00:05:16.464 14:45:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.464 14:45:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.056 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.056 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.056 14:45:49 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:17.056 14:45:49 -- setup/hugepages.sh@89 -- # local node 00:05:17.056 14:45:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.056 14:45:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.056 14:45:49 -- setup/hugepages.sh@92 -- # local surp 00:05:17.056 14:45:49 -- setup/hugepages.sh@93 -- # local resv 00:05:17.056 14:45:49 -- setup/hugepages.sh@94 -- # local anon 00:05:17.056 14:45:49 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.056 14:45:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.056 14:45:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.056 14:45:49 -- setup/common.sh@18 -- # local node= 00:05:17.056 14:45:49 -- setup/common.sh@19 -- # local var val 00:05:17.056 14:45:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.056 14:45:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.056 14:45:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.056 14:45:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.056 14:45:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.056 14:45:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6516336 kB' 'MemAvailable: 9445932 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 498084 kB' 'Inactive: 2753240 kB' 'Active(anon): 128932 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120088 kB' 'Mapped: 50836 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191580 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103280 kB' 'KernelStack: 6776 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 
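The field-by-field scan above is the body of the harness's get_meminfo helper: it picks /proc/meminfo (or a node's meminfo file when a node number is passed), strips any "Node <n> " prefix, and walks the snapshot until the requested key matches. A minimal stand-alone sketch of the same idea in plain bash/awk; get_meminfo_value and its arguments are illustrative names, not the repository's function:

    # Return one field from /proc/meminfo, or from a NUMA node's meminfo file
    # when a node number is supplied. Per-node files prefix every line with
    # "Node <n> ", so that prefix is dropped before matching the key.
    get_meminfo_value() {
        local key=$1 node=${2-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        awk -v key="$key" '
            { sub(/^Node [0-9]+ /, ""); sub(/:$/, "", $1) }  # drop prefix and colon
            $1 == key { print $2; exit }                     # value (kB or bare count)
        ' "$mem_f"
    }

    # e.g. get_meminfo_value HugePages_Total      -> 1025 on this run
    #      get_meminfo_value HugePages_Surp 0     -> surplus 2 MiB pages on node0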
00:05:17.056 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:49 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # 
continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.056 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.056 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.057 14:45:50 -- setup/common.sh@33 -- # echo 0 00:05:17.057 14:45:50 -- setup/common.sh@33 -- # return 0 00:05:17.057 14:45:50 -- setup/hugepages.sh@97 -- # anon=0 00:05:17.057 14:45:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.057 14:45:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.057 14:45:50 -- setup/common.sh@18 -- # local node= 00:05:17.057 14:45:50 -- setup/common.sh@19 -- # local var val 00:05:17.057 14:45:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.057 14:45:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.057 14:45:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.057 14:45:50 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.057 14:45:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.057 14:45:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6516336 kB' 'MemAvailable: 9445932 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497688 kB' 'Inactive: 2753240 kB' 'Active(anon): 128536 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119664 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191616 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103316 kB' 'KernelStack: 6768 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 
-- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.057 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.057 14:45:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 
00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 
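The pass in progress here, and the one that follows it, pull HugePages_Surp and then HugePages_Rsvd out of the same snapshot; both are expected to be 0 once the pool has been sized exactly. Outside the harness the same counters can be spot-checked with a plain grep (values copied from the snapshot printed above; the column spacing is illustrative):

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
    # HugePages_Total:    1025
    # HugePages_Free:     1025
    # HugePages_Rsvd:        0
    # HugePages_Surp:        0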
00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.058 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.058 14:45:50 -- setup/common.sh@33 -- # echo 0 00:05:17.058 14:45:50 -- setup/common.sh@33 -- # return 0 00:05:17.058 14:45:50 -- setup/hugepages.sh@99 -- # surp=0 00:05:17.058 14:45:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.058 14:45:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.058 14:45:50 -- setup/common.sh@18 -- # local node= 00:05:17.058 14:45:50 -- setup/common.sh@19 -- # local var val 00:05:17.058 14:45:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.058 14:45:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.058 14:45:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.058 14:45:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.058 14:45:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.058 14:45:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.058 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6516336 kB' 'MemAvailable: 9445932 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497588 kB' 'Inactive: 2753240 kB' 'Active(anon): 128436 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119564 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191604 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103304 kB' 'KernelStack: 6752 kB' 
'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.059 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.059 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.060 14:45:50 -- setup/common.sh@33 -- # echo 0 00:05:17.060 14:45:50 -- setup/common.sh@33 -- # return 0 00:05:17.060 14:45:50 -- setup/hugepages.sh@100 -- # resv=0 00:05:17.060 nr_hugepages=1025 00:05:17.060 resv_hugepages=0 00:05:17.060 surplus_hugepages=0 00:05:17.060 anon_hugepages=0 00:05:17.060 14:45:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:17.060 14:45:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.060 14:45:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.060 14:45:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.060 14:45:50 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:17.060 14:45:50 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:17.060 14:45:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.060 14:45:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.060 14:45:50 -- setup/common.sh@18 -- # local node= 00:05:17.060 14:45:50 -- setup/common.sh@19 -- # local var val 00:05:17.060 14:45:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.060 14:45:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.060 14:45:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.060 14:45:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.060 14:45:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.060 14:45:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6516336 kB' 'MemAvailable: 9445932 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497584 kB' 'Inactive: 2753240 kB' 'Active(anon): 128432 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119556 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88300 kB' 'Slab: 191600 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103300 kB' 'KernelStack: 6752 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 
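The consistency check traced just above, (( 1025 == nr_hugepages + surp + resv )), is the pool-accounting identity for this odd_alloc run: HUGEMEM=2049 MiB with a 2048 kB page size yields 1025 pages (the deliberately odd count), and with zero surplus and zero reserved pages the kernel's HugePages_Total must equal the number requested. The arithmetic spelled out, with values taken from the trace and illustrative variable names; a ceiling division reproduces the harness's 1025, though the harness's own rounding code is not shown here:

    hugemem_mb=2049            # HUGEMEM set by the odd_alloc test
    hugepagesize_kb=2048       # Hugepagesize reported in /proc/meminfo
    # 2049 MiB = 2098176 kB -> ceil(2098176 / 2048) = 1025 pages
    nr_hugepages=$(( (hugemem_mb * 1024 + hugepagesize_kb - 1) / hugepagesize_kb ))
    surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"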
00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.060 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.060 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 
00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.061 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.061 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.061 14:45:50 -- setup/common.sh@33 -- # echo 1025 00:05:17.061 14:45:50 -- setup/common.sh@33 -- # return 0 00:05:17.061 14:45:50 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:17.061 14:45:50 -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.061 14:45:50 -- setup/hugepages.sh@27 -- # local node 00:05:17.061 14:45:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.061 14:45:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
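The wall of "[[ ... ]] / continue" trace lines above is setup/common.sh's get_meminfo helper walking /proc/meminfo one "Key: value" pair at a time: IFS=': ' plus read -r splits each line, every field that is not the requested counter is skipped with continue, and the matching field (HugePages_Total here) echoes its value, 1025, back to the caller. A minimal sketch of the same scan, with an illustrative helper name that is not part of the repo:

# Sketch only: scan /proc/meminfo for one counter, the way the trace above does.
# get_meminfo_value is an illustrative name, not a function from setup/common.sh.
get_meminfo_value() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_meminfo_value HugePages_Total    # prints 1025 on this runner

The real helper can also read a per-node copy from /sys/devices/system/node/node<N>/meminfo, stripping the leading "Node <N> " prefix first, which is what the mapfile and "${mem[@]#Node +([0-9]) }" lines in the trace are doing.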
00:05:17.061 14:45:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.061 14:45:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.061 14:45:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.061 14:45:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.062 14:45:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.062 14:45:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.062 14:45:50 -- setup/common.sh@18 -- # local node=0 00:05:17.062 14:45:50 -- setup/common.sh@19 -- # local var val 00:05:17.062 14:45:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.062 14:45:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.062 14:45:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.062 14:45:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.062 14:45:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.062 14:45:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6516336 kB' 'MemUsed: 5722772 kB' 'SwapCached: 0 kB' 'Active: 497600 kB' 'Inactive: 2753240 kB' 'Active(anon): 128448 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3132880 kB' 'Mapped: 50764 kB' 'AnonPages: 119580 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88300 kB' 'Slab: 191600 kB' 'SReclaimable: 88300 kB' 'SUnreclaim: 103300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 
14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 
14:45:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.062 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.062 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.063 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.063 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.063 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.063 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.063 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.063 14:45:50 -- setup/common.sh@33 -- # echo 0 00:05:17.063 14:45:50 -- setup/common.sh@33 -- # return 0 00:05:17.063 14:45:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.063 14:45:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.063 14:45:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.063 14:45:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.063 node0=1025 expecting 1025 00:05:17.063 ************************************ 00:05:17.063 END TEST odd_alloc 00:05:17.063 ************************************ 00:05:17.063 14:45:50 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:17.063 14:45:50 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:17.063 00:05:17.063 real 0m0.619s 00:05:17.063 user 0m0.307s 00:05:17.063 sys 0m0.334s 00:05:17.063 14:45:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.063 14:45:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.323 14:45:50 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:17.323 14:45:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.323 14:45:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.323 14:45:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.323 ************************************ 00:05:17.323 START TEST custom_alloc 00:05:17.323 ************************************ 00:05:17.323 14:45:50 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:17.323 14:45:50 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:17.323 14:45:50 -- setup/hugepages.sh@169 -- # local node 00:05:17.323 14:45:50 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:17.323 14:45:50 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:17.323 14:45:50 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:17.323 14:45:50 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:17.323 14:45:50 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:17.323 14:45:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:17.323 14:45:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.323 14:45:50 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:17.323 14:45:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:17.323 14:45:50 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:17.323 14:45:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.323 14:45:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:17.323 14:45:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.323 14:45:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.323 14:45:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.323 14:45:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:17.323 14:45:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:17.323 14:45:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.323 14:45:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:17.323 14:45:50 -- setup/hugepages.sh@83 -- # : 0 00:05:17.323 14:45:50 -- setup/hugepages.sh@84 -- # : 0 00:05:17.324 14:45:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.324 14:45:50 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:17.324 14:45:50 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:17.324 14:45:50 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:17.324 14:45:50 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:17.324 14:45:50 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:17.324 14:45:50 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:17.324 14:45:50 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:17.324 14:45:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.324 14:45:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:17.324 14:45:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.324 14:45:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.324 14:45:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.324 14:45:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:17.324 14:45:50 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:17.324 14:45:50 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:17.324 14:45:50 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:17.324 14:45:50 -- setup/hugepages.sh@78 -- # return 0 00:05:17.324 14:45:50 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:17.324 14:45:50 -- setup/hugepages.sh@187 -- # setup output 00:05:17.324 14:45:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.324 14:45:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.584 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.584 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.584 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.584 14:45:50 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:17.584 14:45:50 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:17.584 14:45:50 -- setup/hugepages.sh@89 -- # local node 00:05:17.584 14:45:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.584 14:45:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.584 14:45:50 -- setup/hugepages.sh@92 -- # local surp 
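Above, custom_alloc asks get_test_nr_hugepages for 1048576 kB of hugepage memory; with the default 2048 kB hugepage size that is 1048576 / 2048 = 512 pages, pinned to a single node through HUGENODE='nodes_hp[0]=512' before scripts/setup.sh is run. A rough sketch of what a per-node request of that size amounts to at the sysfs level (generic kernel interface shown for illustration; the exact steps setup.sh performs are not reproduced here):

# Illustration only: request 1 GiB of 2048 kB hugepages on node 0 via sysfs.
size_kb=1048576
page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this runner
pages=$(( size_kb / page_kb ))                                # 512
echo "$pages" | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-${page_kb}kB/nr_hugepages

The verify_nr_hugepages pass that starts right after then re-reads the counters to confirm the kernel actually granted that many pages.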
00:05:17.584 14:45:50 -- setup/hugepages.sh@93 -- # local resv 00:05:17.584 14:45:50 -- setup/hugepages.sh@94 -- # local anon 00:05:17.584 14:45:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.584 14:45:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.584 14:45:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.584 14:45:50 -- setup/common.sh@18 -- # local node= 00:05:17.584 14:45:50 -- setup/common.sh@19 -- # local var val 00:05:17.584 14:45:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.584 14:45:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.584 14:45:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.584 14:45:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.584 14:45:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.584 14:45:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7572248 kB' 'MemAvailable: 10501828 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 498024 kB' 'Inactive: 2753240 kB' 'Active(anon): 128872 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119764 kB' 'Mapped: 50884 kB' 'Shmem: 10488 kB' 'KReclaimable: 88268 kB' 'Slab: 191520 kB' 'SReclaimable: 88268 kB' 'SUnreclaim: 103252 kB' 'KernelStack: 6744 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.584 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.584 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 
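The pattern test at hugepages.sh@96 above, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], is reading the transparent-hugepage mode: the bracketed word is the active setting, so anonymous hugepages are only folded into the accounting when THP is not pinned to [never]. On this runner the mode is [madvise], so the script goes on to fetch AnonHugePages (0 kB here). A hedged sketch of that gate, using the standard sysfs path and illustrative variable names:

# Sketch of the THP gate; variable names are illustrative, not the script's own.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon_kb=0
fi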
00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.585 14:45:50 -- setup/common.sh@33 -- # echo 0 00:05:17.585 14:45:50 -- setup/common.sh@33 -- # return 0 00:05:17.585 14:45:50 -- setup/hugepages.sh@97 -- # anon=0 00:05:17.585 14:45:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.585 14:45:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.585 14:45:50 -- setup/common.sh@18 -- # local node= 00:05:17.585 14:45:50 -- setup/common.sh@19 -- # local var val 00:05:17.585 14:45:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.585 14:45:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:05:17.585 14:45:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.585 14:45:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.585 14:45:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.585 14:45:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7572248 kB' 'MemAvailable: 10501828 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497868 kB' 'Inactive: 2753240 kB' 'Active(anon): 128716 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119788 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88268 kB' 'Slab: 191516 kB' 'SReclaimable: 88268 kB' 'SUnreclaim: 103248 kB' 'KernelStack: 6736 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.585 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.585 14:45:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.847 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- 
setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 
00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.848 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.848 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.849 14:45:50 -- setup/common.sh@33 -- # echo 0 00:05:17.849 14:45:50 -- setup/common.sh@33 -- # return 0 00:05:17.849 14:45:50 -- setup/hugepages.sh@99 -- # surp=0 00:05:17.849 14:45:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.849 14:45:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.849 14:45:50 -- setup/common.sh@18 -- # local node= 00:05:17.849 14:45:50 -- setup/common.sh@19 -- # local var val 00:05:17.849 14:45:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.849 14:45:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.849 14:45:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.849 14:45:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.849 14:45:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.849 14:45:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7572248 kB' 'MemAvailable: 10501828 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497752 kB' 'Inactive: 2753240 kB' 'Active(anon): 128600 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119692 kB' 'Mapped: 
50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88268 kB' 'Slab: 191524 kB' 'SReclaimable: 88268 kB' 'SUnreclaim: 103256 kB' 'KernelStack: 6768 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 14:45:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 
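What the remaining trace is building toward is the same bookkeeping already applied to odd_alloc at hugepages.sh@110: anon, surplus and reserved pages are collected, and HugePages_Total must balance against the requested count plus surplus and reserved. With the values printed in this run the check reduces to simple arithmetic; the snippet below restates it with counters read straight from /proc/meminfo (helper-free, for illustration only):

# Illustration: the balance the verification reduces to, using this run's values.
nr_hugepages=512
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0 here
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0 here
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 512 here
(( total == nr_hugepages + surp + resv )) && echo "hugepage count verified"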
00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 
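The long run of records above and below is the xtrace of setup/common.sh's get_meminfo helper: it loads /proc/meminfo (or, when a node number is given, that node's meminfo file under /sys/devices/system/node), strips the 'Node N' prefix, and walks the fields one at a time until it finds the requested key, echoing its value and returning. A minimal bash sketch of that pattern follows; the paths, the 'Node +([0-9])' prefix strip and the IFS=': ' split are taken from the trace, while the function name get_meminfo_sketch and the exact wording are illustrative, not the verbatim SPDK helper.

  # Minimal sketch, assuming the behaviour visible in the trace above.
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      local -a mem
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every key with 'Node N'
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"                # the trace shows 'echo 0' / 'echo 512' at this point
              return 0
          fi
      done
      return 1
  }

For example, get_meminfo_sketch HugePages_Rsvd prints 0 on this host and get_meminfo_sketch HugePages_Total prints 512, matching the 'echo 0' / 'echo 512' followed by 'return 0' records that close each scan in the trace.
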
00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.850 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.850 14:45:50 -- setup/common.sh@33 -- # echo 0 00:05:17.850 14:45:50 -- setup/common.sh@33 -- # return 0 00:05:17.850 nr_hugepages=512 00:05:17.850 14:45:50 -- setup/hugepages.sh@100 -- # resv=0 00:05:17.850 14:45:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:17.850 resv_hugepages=0 00:05:17.850 surplus_hugepages=0 00:05:17.850 14:45:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.850 14:45:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.850 anon_hugepages=0 00:05:17.850 14:45:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.850 14:45:50 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:17.850 14:45:50 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:17.850 14:45:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.850 14:45:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.850 14:45:50 -- setup/common.sh@18 -- # local node= 00:05:17.850 14:45:50 -- setup/common.sh@19 -- # local var val 00:05:17.850 14:45:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.850 14:45:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.850 14:45:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.850 14:45:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.850 14:45:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.850 14:45:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.850 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7572248 kB' 'MemAvailable: 10501828 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 497900 kB' 'Inactive: 2753240 kB' 'Active(anon): 128748 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119824 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88268 kB' 'Slab: 191516 kB' 'SReclaimable: 88268 kB' 'SUnreclaim: 103248 kB' 'KernelStack: 6752 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 323252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 
'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 
14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.851 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.851 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.852 14:45:50 -- setup/common.sh@33 -- # echo 512 00:05:17.852 14:45:50 -- setup/common.sh@33 -- # return 0 00:05:17.852 14:45:50 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:17.852 14:45:50 -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.852 14:45:50 -- setup/hugepages.sh@27 -- # local node 00:05:17.852 14:45:50 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:05:17.852 14:45:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:17.852 14:45:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.852 14:45:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.852 14:45:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.852 14:45:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.852 14:45:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.852 14:45:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.852 14:45:50 -- setup/common.sh@18 -- # local node=0 00:05:17.852 14:45:50 -- setup/common.sh@19 -- # local var val 00:05:17.852 14:45:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.852 14:45:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.852 14:45:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.852 14:45:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.852 14:45:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.852 14:45:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7572248 kB' 'MemUsed: 4666860 kB' 'SwapCached: 0 kB' 'Active: 497684 kB' 'Inactive: 2753240 kB' 'Active(anon): 128532 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3132880 kB' 'Mapped: 50764 kB' 'AnonPages: 119636 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88268 kB' 'Slab: 191516 kB' 'SReclaimable: 88268 kB' 'SUnreclaim: 103248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.852 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.852 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 
14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # continue 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.853 14:45:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.853 14:45:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.853 14:45:50 -- setup/common.sh@33 -- # echo 0 00:05:17.853 14:45:50 -- setup/common.sh@33 -- # return 0 00:05:17.853 node0=512 expecting 512 00:05:17.853 ************************************ 00:05:17.853 END TEST custom_alloc 00:05:17.853 ************************************ 00:05:17.853 14:45:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.853 14:45:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.853 14:45:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.853 14:45:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.853 14:45:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:17.853 14:45:50 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:17.853 00:05:17.853 real 0m0.642s 00:05:17.853 user 0m0.295s 00:05:17.853 sys 0m0.347s 00:05:17.853 14:45:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.853 14:45:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.853 14:45:50 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:17.853 14:45:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.853 14:45:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.853 14:45:50 -- common/autotest_common.sh@10 -- # set +x 00:05:17.853 ************************************ 00:05:17.853 START TEST no_shrink_alloc 00:05:17.853 ************************************ 00:05:17.853 14:45:50 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:17.853 14:45:50 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:17.853 14:45:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:17.853 14:45:50 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:17.853 14:45:50 -- 
setup/hugepages.sh@51 -- # shift 00:05:17.853 14:45:50 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:17.853 14:45:50 -- setup/hugepages.sh@52 -- # local node_ids 00:05:17.853 14:45:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.853 14:45:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:17.853 14:45:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:17.853 14:45:50 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:17.853 14:45:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.853 14:45:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:17.853 14:45:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.853 14:45:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.853 14:45:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.853 14:45:50 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:17.853 14:45:50 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:17.853 14:45:50 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:17.853 14:45:50 -- setup/hugepages.sh@73 -- # return 0 00:05:17.853 14:45:50 -- setup/hugepages.sh@198 -- # setup output 00:05:17.853 14:45:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.853 14:45:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.425 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.425 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.425 14:45:51 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:18.425 14:45:51 -- setup/hugepages.sh@89 -- # local node 00:05:18.425 14:45:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:18.425 14:45:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:18.425 14:45:51 -- setup/hugepages.sh@92 -- # local surp 00:05:18.425 14:45:51 -- setup/hugepages.sh@93 -- # local resv 00:05:18.425 14:45:51 -- setup/hugepages.sh@94 -- # local anon 00:05:18.425 14:45:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:18.425 14:45:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:18.425 14:45:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:18.425 14:45:51 -- setup/common.sh@18 -- # local node= 00:05:18.425 14:45:51 -- setup/common.sh@19 -- # local var val 00:05:18.425 14:45:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.425 14:45:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.425 14:45:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.425 14:45:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.425 14:45:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.425 14:45:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.425 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.425 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6529088 kB' 'MemAvailable: 9458676 kB' 'Buffers: 2684 kB' 'Cached: 3130196 kB' 'SwapCached: 0 kB' 'Active: 498004 kB' 'Inactive: 2753240 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119992 kB' 
'Mapped: 50904 kB' 'Shmem: 10488 kB' 'KReclaimable: 88284 kB' 'Slab: 191508 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 103224 kB' 'KernelStack: 6808 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 327936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
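By this point the no_shrink_alloc test has requested 2097152 kB of hugepages on node 0; with the default 2048 kB hugepage size that is 2097152 / 2048 = 1024 pages, which matches the HugePages_Total: 1024 and Hugetlb: 2097152 kB values in the dumps above. verify_nr_hugepages then checks transparent hugepages: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] record tests whether THP is disabled, and since the mode here is [madvise] rather than [never] the script goes on to read AnonHugePages so THP-backed anonymous memory is counted separately from the explicit hugepage pool. A rough sketch of that accounting, reusing the get_meminfo_sketch helper above; the sysfs path for the THP mode is an assumption inferred from the 'always [madvise] never' string and does not appear in the log itself:

  # Hedged sketch of the THP / surplus / reserved accounting traced here.
  thp_modes=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # assumed path
  anon=0
  if [[ $thp_modes != *"[never]"* ]]; then
      # THP not disabled: anonymous hugepages may exist and must be tracked
      # separately from the HugePages_* pool the test reserved.
      anon=$(get_meminfo_sketch AnonHugePages)
  fi
  surp=$(get_meminfo_sketch HugePages_Surp)
  resv=$(get_meminfo_sketch HugePages_Rsvd)
  echo "anon_hugepages=$anon surplus_hugepages=$surp resv_hugepages=$resv"

In this run all three come back 0, so only the 1024 explicitly reserved pages count toward the expected total.
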
00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.426 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.426 14:45:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.426 14:45:51 -- setup/common.sh@33 -- # echo 0 00:05:18.427 14:45:51 -- setup/common.sh@33 -- # return 0 00:05:18.427 14:45:51 -- setup/hugepages.sh@97 -- # anon=0 00:05:18.427 14:45:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:18.427 14:45:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.427 14:45:51 -- setup/common.sh@18 -- # local node= 00:05:18.427 14:45:51 -- setup/common.sh@19 -- # local var val 00:05:18.427 14:45:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.427 14:45:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.427 14:45:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.427 14:45:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.427 14:45:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.427 14:45:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6529088 kB' 'MemAvailable: 9458676 kB' 'Buffers: 2684 kB' 'Cached: 3130200 kB' 'SwapCached: 0 kB' 'Active: 497460 kB' 'Inactive: 2753240 kB' 'Active(anon): 128308 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119432 kB' 'Mapped: 50904 kB' 'Shmem: 10488 kB' 'KReclaimable: 88284 kB' 'Slab: 191500 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 103216 kB' 'KernelStack: 6712 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.427 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.427 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 
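The trace above and below is setup/common.sh's get_meminfo helper scanning /proc/meminfo (or the per-node copy under /sys/devices/system/node) one "key: value" pair at a time, echoing the value once the requested key matches; setup/hugepages.sh then feeds the HugePages_Surp, HugePages_Rsvd and HugePages_Total values into its accounting check. The following is only a minimal sketch of that idea, assuming a simplified awk-based lookup and a hard-coded request of 1024 pages; the helper name and structure are illustrative, not the project's actual implementation.

#!/usr/bin/env bash
# Sketch only: emulate the meminfo lookup pattern visible in this trace.
get_meminfo() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live in /sys/devices/system/node/nodeN/meminfo
    # and prefix each line with "Node <n> ", so match on the key field itself.
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    awk -v k="$key:" '{ for (i = 1; i <= NF; i++) if ($i == k) { print $(i + 1); exit } }' "$mem_f"
}

# The verification step later in this log reduces to this arithmetic:
nr_hugepages=1024                          # requested page count (assumed for the sketch)
surp=$(get_meminfo HugePages_Surp)         # surplus pages
resv=$(get_meminfo HugePages_Rsvd)         # reserved pages
total=$(get_meminfo HugePages_Total)
if (( total == nr_hugepages + surp + resv )); then
    echo "node0=$total expecting $nr_hugepages"
fi
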
00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.428 14:45:51 -- setup/common.sh@33 -- # echo 0 00:05:18.428 14:45:51 -- setup/common.sh@33 -- # return 0 00:05:18.428 14:45:51 -- setup/hugepages.sh@99 -- # surp=0 00:05:18.428 14:45:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:18.428 14:45:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:18.428 14:45:51 -- setup/common.sh@18 -- # local node= 00:05:18.428 14:45:51 -- setup/common.sh@19 -- # local var val 00:05:18.428 14:45:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.428 14:45:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.428 14:45:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.428 14:45:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.428 14:45:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.428 14:45:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6529088 kB' 'MemAvailable: 9458676 kB' 'Buffers: 2684 kB' 'Cached: 3130200 kB' 'SwapCached: 0 kB' 'Active: 497440 kB' 'Inactive: 2753240 kB' 'Active(anon): 128288 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119412 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88284 kB' 'Slab: 191480 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 103196 kB' 'KernelStack: 6752 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': 
' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.428 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.428 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 
-- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.429 14:45:51 -- setup/common.sh@33 -- # echo 0 00:05:18.429 14:45:51 -- setup/common.sh@33 -- # return 0 00:05:18.429 14:45:51 -- setup/hugepages.sh@100 -- # resv=0 00:05:18.429 14:45:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:18.429 nr_hugepages=1024 00:05:18.429 14:45:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.429 resv_hugepages=0 00:05:18.429 14:45:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.429 surplus_hugepages=0 00:05:18.429 anon_hugepages=0 00:05:18.429 14:45:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.429 14:45:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.429 14:45:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:18.429 14:45:51 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:05:18.429 14:45:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.429 14:45:51 -- setup/common.sh@18 -- # local node= 00:05:18.429 14:45:51 -- setup/common.sh@19 -- # local var val 00:05:18.429 14:45:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.429 14:45:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.429 14:45:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.429 14:45:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.429 14:45:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.429 14:45:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6529088 kB' 'MemAvailable: 9458676 kB' 'Buffers: 2684 kB' 'Cached: 3130200 kB' 'SwapCached: 0 kB' 'Active: 497684 kB' 'Inactive: 2753240 kB' 'Active(anon): 128532 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119664 kB' 'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88284 kB' 'Slab: 191476 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 103192 kB' 'KernelStack: 6752 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 323452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.430 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.430 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.431 14:45:51 -- setup/common.sh@33 -- # echo 1024 00:05:18.431 14:45:51 -- setup/common.sh@33 -- # return 0 00:05:18.431 14:45:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.431 14:45:51 -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.431 14:45:51 -- setup/hugepages.sh@27 -- # local node 00:05:18.431 14:45:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.431 14:45:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:18.431 14:45:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.431 14:45:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.431 14:45:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.431 14:45:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.431 14:45:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.431 14:45:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.431 14:45:51 -- setup/common.sh@18 -- # local node=0 00:05:18.431 14:45:51 -- setup/common.sh@19 -- # local var val 00:05:18.431 14:45:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.431 14:45:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.431 14:45:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.431 14:45:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.431 14:45:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.431 14:45:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6529708 kB' 'MemUsed: 5709400 kB' 'SwapCached: 0 kB' 'Active: 495032 kB' 'Inactive: 2753240 kB' 'Active(anon): 125880 kB' 'Inactive(anon): 0 kB' 
'Active(file): 369152 kB' 'Inactive(file): 2753240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3132884 kB' 'Mapped: 49916 kB' 'AnonPages: 116944 kB' 'Shmem: 10488 kB' 'KernelStack: 6640 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88284 kB' 'Slab: 191388 kB' 'SReclaimable: 88284 kB' 'SUnreclaim: 103104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 
-- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # continue 00:05:18.432 14:45:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.432 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.432 14:45:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.432 14:45:51 -- setup/common.sh@33 -- # echo 0 00:05:18.432 14:45:51 -- setup/common.sh@33 -- # return 0 00:05:18.432 14:45:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.432 14:45:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.432 node0=1024 expecting 1024 00:05:18.432 14:45:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.432 14:45:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.432 14:45:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:18.432 14:45:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:18.432 14:45:51 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:18.432 14:45:51 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:18.432 14:45:51 -- setup/hugepages.sh@202 -- # setup output 00:05:18.432 14:45:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.432 14:45:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.003 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.003 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.003 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:19.003 14:45:51 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:19.003 14:45:51 -- setup/hugepages.sh@89 -- # local node 00:05:19.003 14:45:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.003 14:45:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.003 14:45:51 -- setup/hugepages.sh@92 -- # local surp 00:05:19.003 14:45:51 -- setup/hugepages.sh@93 -- # local resv 00:05:19.003 14:45:51 -- setup/hugepages.sh@94 -- # local anon 00:05:19.003 14:45:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.003 14:45:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.003 14:45:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.003 14:45:51 -- setup/common.sh@18 -- # local node= 00:05:19.003 14:45:51 -- setup/common.sh@19 -- # local var val 00:05:19.003 14:45:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.003 14:45:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.003 14:45:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.003 14:45:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.003 14:45:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.003 14:45:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.003 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.003 14:45:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6531308 kB' 'MemAvailable: 9460892 kB' 'Buffers: 2684 kB' 'Cached: 3130200 kB' 'SwapCached: 0 kB' 'Active: 495380 kB' 'Inactive: 2753244 kB' 'Active(anon): 126228 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117384 kB' 'Mapped: 50012 kB' 'Shmem: 10488 kB' 'KReclaimable: 88272 kB' 'Slab: 191248 kB' 'SReclaimable: 88272 kB' 'SUnreclaim: 102976 kB' 
'KernelStack: 6680 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:19.003 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.003 14:45:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 
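The xtrace above and below is the get_meminfo helper from setup/common.sh at work: with IFS set to ': ' it reads one meminfo line at a time into var/val, skips every field that is not the requested key, and echoes the matching value (here 0, since the dump shows 'AnonHugePages: 0 kB'). A minimal, simplified sketch of that lookup — illustrative only, not the verbatim setup/common.sh source — looks like this:

    # Simplified sketch of the lookup being traced (not the exact
    # setup/common.sh implementation).
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, read the per-node counters instead.
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while IFS= read -r line; do
            # Per-node files prefix every line with "Node <id> "; strip it.
            [[ -n $node ]] && line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<< "$line"
            # Skip every field until the requested key is found.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

For example, get_meminfo HugePages_Free 0 would print 1024 against the node0 snapshot that appears further down in this trace.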
00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.004 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.004 14:45:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.004 14:45:51 -- setup/common.sh@33 -- # echo 0 00:05:19.004 14:45:51 -- setup/common.sh@33 -- # return 0 00:05:19.004 14:45:51 -- setup/hugepages.sh@97 -- # anon=0 00:05:19.004 14:45:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.004 14:45:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.005 14:45:51 -- setup/common.sh@18 -- # local node= 00:05:19.005 14:45:51 -- setup/common.sh@19 -- # local var val 00:05:19.005 14:45:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.005 14:45:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.005 14:45:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.005 14:45:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.005 14:45:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.005 14:45:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6531308 kB' 'MemAvailable: 9460892 kB' 'Buffers: 2684 kB' 'Cached: 3130200 kB' 'SwapCached: 0 kB' 'Active: 495252 kB' 'Inactive: 2753244 kB' 'Active(anon): 126100 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117184 kB' 'Mapped: 50012 kB' 'Shmem: 10488 kB' 'KReclaimable: 88272 kB' 'Slab: 191248 kB' 'SReclaimable: 88272 kB' 'SUnreclaim: 102976 kB' 'KernelStack: 6632 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 
'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.005 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.005 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.006 
14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.006 14:45:52 -- setup/common.sh@33 -- # echo 0 00:05:19.006 14:45:52 -- setup/common.sh@33 -- # return 0 00:05:19.006 14:45:52 -- setup/hugepages.sh@99 -- # surp=0 00:05:19.006 14:45:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.006 14:45:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.006 14:45:52 -- setup/common.sh@18 -- # local node= 00:05:19.006 14:45:52 -- setup/common.sh@19 -- # local var val 00:05:19.006 14:45:52 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.006 14:45:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.006 14:45:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.006 14:45:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.006 14:45:52 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.006 14:45:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6531716 kB' 'MemAvailable: 9461300 kB' 'Buffers: 2684 kB' 'Cached: 3130200 kB' 'SwapCached: 0 kB' 'Active: 495316 kB' 'Inactive: 2753244 kB' 'Active(anon): 126164 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117268 kB' 'Mapped: 50176 kB' 'Shmem: 10488 kB' 'KReclaimable: 88272 kB' 'Slab: 191240 kB' 'SReclaimable: 88272 kB' 'SUnreclaim: 102968 kB' 'KernelStack: 6656 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 307320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.006 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.006 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 
14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # 
continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.007 14:45:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.007 14:45:52 -- setup/common.sh@33 -- # echo 0 00:05:19.007 14:45:52 -- setup/common.sh@33 -- # return 0 00:05:19.007 14:45:52 -- setup/hugepages.sh@100 -- # resv=0 00:05:19.007 nr_hugepages=1024 00:05:19.007 14:45:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:19.007 resv_hugepages=0 00:05:19.007 14:45:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.007 surplus_hugepages=0 00:05:19.007 14:45:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.007 anon_hugepages=0 00:05:19.007 14:45:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.007 14:45:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.007 14:45:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:19.007 14:45:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.007 14:45:52 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:19.007 14:45:52 -- setup/common.sh@18 -- # local node= 00:05:19.007 14:45:52 -- setup/common.sh@19 -- # local var val 00:05:19.007 14:45:52 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.007 14:45:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.007 14:45:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.007 14:45:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.007 14:45:52 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.007 14:45:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.007 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6531716 kB' 'MemAvailable: 9461300 kB' 'Buffers: 2684 kB' 'Cached: 3130200 kB' 'SwapCached: 0 kB' 'Active: 494956 kB' 'Inactive: 2753244 kB' 'Active(anon): 125804 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117016 kB' 'Mapped: 49916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88272 kB' 'Slab: 191244 kB' 'SReclaimable: 88272 kB' 'SUnreclaim: 102972 kB' 'KernelStack: 6656 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- 
setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 
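This pass is scanning for HugePages_Total, which the surrounding hugepages.sh code then compares against the requested page count plus the surplus and reserved figures collected above ((( 1024 == nr_hugepages + surp + resv )) in the trace). The same consistency check can be expressed standalone against /proc/meminfo; the variable names below follow the trace and the values (1024/0/0) come from the dump, but this is an assumed-equivalent sketch rather than the literal hugepages.sh code:

    # Standalone version of the consistency check being traced here.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages verified"
    else
        echo "expected $((nr_hugepages + surp + resv)) huge pages, found $total" >&2
    fi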
00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.008 
14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.008 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.008 14:45:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.009 14:45:52 -- setup/common.sh@33 -- # echo 1024 00:05:19.009 14:45:52 -- setup/common.sh@33 -- # return 0 00:05:19.009 14:45:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.009 14:45:52 -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.009 14:45:52 -- setup/hugepages.sh@27 -- # local node 00:05:19.009 14:45:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.009 14:45:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:19.009 14:45:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.009 14:45:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.009 14:45:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.009 14:45:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.009 14:45:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.009 14:45:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.009 14:45:52 -- setup/common.sh@18 -- # local node=0 00:05:19.009 14:45:52 -- setup/common.sh@19 -- # local var val 00:05:19.009 14:45:52 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.009 14:45:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.009 14:45:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.009 14:45:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.009 14:45:52 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.009 14:45:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6531716 kB' 'MemUsed: 5707392 kB' 'SwapCached: 0 kB' 'Active: 494968 kB' 'Inactive: 2753244 kB' 'Active(anon): 125816 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 3132884 kB' 'Mapped: 49916 kB' 'AnonPages: 117028 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88272 kB' 'Slab: 191244 kB' 'SReclaimable: 88272 kB' 'SUnreclaim: 102972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 
14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.009 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.009 14:45:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- 
# continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@32 -- # continue 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.010 14:45:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.010 14:45:52 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.010 14:45:52 -- setup/common.sh@33 -- # echo 0 00:05:19.010 14:45:52 -- setup/common.sh@33 -- # return 0 00:05:19.010 14:45:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.010 14:45:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.010 14:45:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.010 14:45:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.010 node0=1024 expecting 1024 00:05:19.010 14:45:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:19.010 14:45:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:19.010 00:05:19.010 real 0m1.181s 00:05:19.010 user 0m0.555s 00:05:19.010 sys 0m0.678s 00:05:19.010 14:45:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.010 14:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:19.010 ************************************ 00:05:19.010 END TEST no_shrink_alloc 00:05:19.010 ************************************ 00:05:19.269 14:45:52 -- setup/hugepages.sh@217 -- # clear_hp 00:05:19.269 14:45:52 -- setup/hugepages.sh@37 -- # local node hp 00:05:19.269 14:45:52 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:19.269 14:45:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.269 14:45:52 -- setup/hugepages.sh@41 -- # echo 0 00:05:19.269 14:45:52 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.269 14:45:52 -- setup/hugepages.sh@41 -- # echo 0 00:05:19.269 14:45:52 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:19.269 14:45:52 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:19.269 00:05:19.269 real 0m5.272s 00:05:19.269 user 0m2.482s 00:05:19.269 sys 0m2.864s 00:05:19.269 14:45:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.269 14:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:19.269 ************************************ 00:05:19.269 END TEST hugepages 00:05:19.269 ************************************ 00:05:19.269 14:45:52 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:19.269 14:45:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.269 14:45:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.269 14:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:19.269 ************************************ 00:05:19.269 START TEST driver 00:05:19.269 ************************************ 00:05:19.269 14:45:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:19.269 * Looking for test storage... 
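The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" (and later \H\u\g\e\P\a\g\e\s\_\S\u\r\p) lines above is setup/common.sh's get_meminfo helper comparing every key of /proc/meminfo, or of /sys/devices/system/node/node0/meminfo when a node is given, against the field it was asked for and echoing the value on the first match; the hugepages test then checks that value against the expected count ("node0=1024 expecting 1024"). A condensed sketch of that logic, splitting fields on ': ' as the trace does — the traced implementation buffers the file with mapfile and strips the "Node <n> " prefix with a parameter expansion, but the effect is the same:

    get_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        # Use the per-node meminfo file when a node number was requested
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <n> ", so strip that,
        # then split each line on ': ' and return the value of the wanted key
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

Usage mirroring the trace: get_meminfo HugePages_Surp 0 prints the surplus hugepage count for node 0, which the caller adds into nodes_test[0]. The driver test log continues below.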
00:05:19.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:19.269 14:45:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.269 14:45:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.269 14:45:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.528 14:45:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.528 14:45:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.528 14:45:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.528 14:45:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.528 14:45:52 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.528 14:45:52 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.528 14:45:52 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.528 14:45:52 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.528 14:45:52 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.528 14:45:52 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.528 14:45:52 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.528 14:45:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.528 14:45:52 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.528 14:45:52 -- scripts/common.sh@344 -- # : 1 00:05:19.528 14:45:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.528 14:45:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.528 14:45:52 -- scripts/common.sh@364 -- # decimal 1 00:05:19.528 14:45:52 -- scripts/common.sh@352 -- # local d=1 00:05:19.528 14:45:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.528 14:45:52 -- scripts/common.sh@354 -- # echo 1 00:05:19.528 14:45:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:19.528 14:45:52 -- scripts/common.sh@365 -- # decimal 2 00:05:19.528 14:45:52 -- scripts/common.sh@352 -- # local d=2 00:05:19.528 14:45:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.528 14:45:52 -- scripts/common.sh@354 -- # echo 2 00:05:19.528 14:45:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:19.528 14:45:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:19.528 14:45:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:19.528 14:45:52 -- scripts/common.sh@367 -- # return 0 00:05:19.528 14:45:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.528 14:45:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:19.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.528 --rc genhtml_branch_coverage=1 00:05:19.528 --rc genhtml_function_coverage=1 00:05:19.528 --rc genhtml_legend=1 00:05:19.528 --rc geninfo_all_blocks=1 00:05:19.528 --rc geninfo_unexecuted_blocks=1 00:05:19.528 00:05:19.528 ' 00:05:19.528 14:45:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:19.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.528 --rc genhtml_branch_coverage=1 00:05:19.528 --rc genhtml_function_coverage=1 00:05:19.528 --rc genhtml_legend=1 00:05:19.528 --rc geninfo_all_blocks=1 00:05:19.528 --rc geninfo_unexecuted_blocks=1 00:05:19.528 00:05:19.528 ' 00:05:19.528 14:45:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:19.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.528 --rc genhtml_branch_coverage=1 00:05:19.528 --rc genhtml_function_coverage=1 00:05:19.528 --rc genhtml_legend=1 00:05:19.528 --rc geninfo_all_blocks=1 00:05:19.528 --rc geninfo_unexecuted_blocks=1 00:05:19.528 00:05:19.528 ' 00:05:19.528 14:45:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:19.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.528 --rc genhtml_branch_coverage=1 00:05:19.528 --rc genhtml_function_coverage=1 00:05:19.528 --rc genhtml_legend=1 00:05:19.528 --rc geninfo_all_blocks=1 00:05:19.528 --rc geninfo_unexecuted_blocks=1 00:05:19.528 00:05:19.528 ' 00:05:19.528 14:45:52 -- setup/driver.sh@68 -- # setup reset 00:05:19.528 14:45:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.528 14:45:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.095 14:45:52 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:20.095 14:45:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.095 14:45:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.095 14:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:20.095 ************************************ 00:05:20.095 START TEST guess_driver 00:05:20.095 ************************************ 00:05:20.095 14:45:53 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:20.095 14:45:53 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:20.095 14:45:53 -- setup/driver.sh@47 -- # local fail=0 00:05:20.095 14:45:53 -- setup/driver.sh@49 -- # pick_driver 00:05:20.095 14:45:53 -- setup/driver.sh@36 -- # vfio 00:05:20.095 14:45:53 -- setup/driver.sh@21 -- # local iommu_grups 00:05:20.095 14:45:53 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:20.095 14:45:53 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:20.095 14:45:53 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:20.095 14:45:53 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:20.095 14:45:53 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:20.095 14:45:53 -- setup/driver.sh@32 -- # return 1 00:05:20.095 14:45:53 -- setup/driver.sh@38 -- # uio 00:05:20.095 14:45:53 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:20.095 14:45:53 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:20.095 14:45:53 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:20.095 14:45:53 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:20.095 14:45:53 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:20.095 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:20.095 14:45:53 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:20.095 Looking for driver=uio_pci_generic 00:05:20.096 14:45:53 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:20.096 14:45:53 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:20.096 14:45:53 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:20.096 14:45:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.096 14:45:53 -- setup/driver.sh@45 -- # setup output config 00:05:20.096 14:45:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.096 14:45:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.662 14:45:53 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:20.662 14:45:53 -- setup/driver.sh@58 -- # continue 00:05:20.662 14:45:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.920 14:45:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:20.920 14:45:53 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:20.920 14:45:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.920 14:45:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:20.920 14:45:53 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:20.920 14:45:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.920 14:45:53 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:20.920 14:45:53 -- setup/driver.sh@65 -- # setup reset 00:05:20.920 14:45:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.920 14:45:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.487 00:05:21.487 real 0m1.556s 00:05:21.487 user 0m0.604s 00:05:21.487 sys 0m0.954s 00:05:21.487 14:45:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.487 14:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:21.487 ************************************ 00:05:21.487 END TEST guess_driver 00:05:21.487 ************************************ 00:05:21.487 00:05:21.487 real 0m2.389s 00:05:21.487 user 0m0.933s 00:05:21.487 sys 0m1.520s 00:05:21.487 14:45:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.487 14:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:21.487 ************************************ 00:05:21.487 END TEST driver 00:05:21.487 ************************************ 00:05:21.745 14:45:54 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:21.745 14:45:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.745 14:45:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.745 14:45:54 -- common/autotest_common.sh@10 -- # set +x 00:05:21.745 ************************************ 00:05:21.745 START TEST devices 00:05:21.745 ************************************ 00:05:21.745 14:45:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:21.745 * Looking for test storage... 00:05:21.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:21.745 14:45:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.745 14:45:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.745 14:45:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.745 14:45:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.745 14:45:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.745 14:45:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.745 14:45:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.745 14:45:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.745 14:45:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.746 14:45:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.746 14:45:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.746 14:45:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.746 14:45:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.746 14:45:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.746 14:45:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.746 14:45:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.746 14:45:54 -- scripts/common.sh@344 -- # : 1 00:05:21.746 14:45:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.746 14:45:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.746 14:45:54 -- scripts/common.sh@364 -- # decimal 1 00:05:21.746 14:45:54 -- scripts/common.sh@352 -- # local d=1 00:05:21.746 14:45:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.746 14:45:54 -- scripts/common.sh@354 -- # echo 1 00:05:21.746 14:45:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.746 14:45:54 -- scripts/common.sh@365 -- # decimal 2 00:05:21.746 14:45:54 -- scripts/common.sh@352 -- # local d=2 00:05:21.746 14:45:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.746 14:45:54 -- scripts/common.sh@354 -- # echo 2 00:05:21.746 14:45:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.746 14:45:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.746 14:45:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.746 14:45:54 -- scripts/common.sh@367 -- # return 0 00:05:21.746 14:45:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.746 14:45:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.746 --rc genhtml_branch_coverage=1 00:05:21.746 --rc genhtml_function_coverage=1 00:05:21.746 --rc genhtml_legend=1 00:05:21.746 --rc geninfo_all_blocks=1 00:05:21.746 --rc geninfo_unexecuted_blocks=1 00:05:21.746 00:05:21.746 ' 00:05:21.746 14:45:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.746 --rc genhtml_branch_coverage=1 00:05:21.746 --rc genhtml_function_coverage=1 00:05:21.746 --rc genhtml_legend=1 00:05:21.746 --rc geninfo_all_blocks=1 00:05:21.746 --rc geninfo_unexecuted_blocks=1 00:05:21.746 00:05:21.746 ' 00:05:21.746 14:45:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.746 --rc genhtml_branch_coverage=1 00:05:21.746 --rc genhtml_function_coverage=1 00:05:21.746 --rc genhtml_legend=1 00:05:21.746 --rc geninfo_all_blocks=1 00:05:21.746 --rc geninfo_unexecuted_blocks=1 00:05:21.746 00:05:21.746 ' 00:05:21.746 14:45:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.746 --rc genhtml_branch_coverage=1 00:05:21.746 --rc genhtml_function_coverage=1 00:05:21.746 --rc genhtml_legend=1 00:05:21.746 --rc geninfo_all_blocks=1 00:05:21.746 --rc geninfo_unexecuted_blocks=1 00:05:21.746 00:05:21.746 ' 00:05:21.746 14:45:54 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:21.746 14:45:54 -- setup/devices.sh@192 -- # setup reset 00:05:21.746 14:45:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.746 14:45:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.679 14:45:55 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:22.679 14:45:55 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:22.679 14:45:55 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:22.679 14:45:55 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:22.679 14:45:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:22.679 14:45:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:22.679 14:45:55 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:22.679 14:45:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:22.679 14:45:55 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:22.679 14:45:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:22.679 14:45:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:22.679 14:45:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:22.679 14:45:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:22.679 14:45:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:22.679 14:45:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:22.679 14:45:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:22.679 14:45:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:22.679 14:45:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:22.679 14:45:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:22.679 14:45:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:22.679 14:45:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:22.679 14:45:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:22.679 14:45:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:22.679 14:45:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:22.679 14:45:55 -- setup/devices.sh@196 -- # blocks=() 00:05:22.679 14:45:55 -- setup/devices.sh@196 -- # declare -a blocks 00:05:22.679 14:45:55 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:22.679 14:45:55 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:22.679 14:45:55 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:22.679 14:45:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:22.679 14:45:55 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:22.679 14:45:55 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:22.680 14:45:55 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:22.680 14:45:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:22.680 14:45:55 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:22.680 14:45:55 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:22.680 14:45:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:22.680 No valid GPT data, bailing 00:05:22.680 14:45:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:22.680 14:45:55 -- scripts/common.sh@393 -- # pt= 00:05:22.680 14:45:55 -- scripts/common.sh@394 -- # return 1 00:05:22.680 14:45:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:22.680 14:45:55 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:22.680 14:45:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:22.680 14:45:55 -- setup/common.sh@80 -- # echo 5368709120 00:05:22.680 14:45:55 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:22.680 14:45:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:22.680 14:45:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:22.680 14:45:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:22.680 14:45:55 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:22.680 14:45:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:22.680 14:45:55 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:22.680 14:45:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:22.680 14:45:55 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
00:05:22.680 14:45:55 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:22.680 14:45:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:22.680 No valid GPT data, bailing 00:05:22.680 14:45:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:22.939 14:45:55 -- scripts/common.sh@393 -- # pt= 00:05:22.939 14:45:55 -- scripts/common.sh@394 -- # return 1 00:05:22.939 14:45:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:22.939 14:45:55 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:22.939 14:45:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:22.939 14:45:55 -- setup/common.sh@80 -- # echo 4294967296 00:05:22.939 14:45:55 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:22.939 14:45:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:22.939 14:45:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:22.939 14:45:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:22.939 14:45:55 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:22.939 14:45:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:22.939 14:45:55 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:22.939 14:45:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:22.939 14:45:55 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:22.939 14:45:55 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:22.939 14:45:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:22.939 No valid GPT data, bailing 00:05:22.939 14:45:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:22.939 14:45:55 -- scripts/common.sh@393 -- # pt= 00:05:22.939 14:45:55 -- scripts/common.sh@394 -- # return 1 00:05:22.939 14:45:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:22.939 14:45:55 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:22.939 14:45:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:22.939 14:45:55 -- setup/common.sh@80 -- # echo 4294967296 00:05:22.939 14:45:55 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:22.939 14:45:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:22.939 14:45:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:22.939 14:45:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:22.939 14:45:55 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:22.939 14:45:55 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:22.939 14:45:55 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:22.939 14:45:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:22.939 14:45:55 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:22.939 14:45:55 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:22.939 14:45:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:22.939 No valid GPT data, bailing 00:05:22.939 14:45:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:22.939 14:45:55 -- scripts/common.sh@393 -- # pt= 00:05:22.939 14:45:55 -- scripts/common.sh@394 -- # return 1 00:05:22.939 14:45:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:22.939 14:45:55 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:22.939 14:45:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:22.939 14:45:55 -- setup/common.sh@80 -- # echo 4294967296 
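The device checks running here (and continuing below for nvme1n2 and nvme1n3) are setup/devices.sh building its list of usable test disks: for each /sys/block/nvme* namespace it skips zoned devices (queue/zoned must read "none"), treats the "No valid GPT data, bailing" result from scripts/spdk-gpt.py as "disk not in use", and keeps only disks of at least min_disk_size=3221225472 bytes (3 GiB). A minimal sketch of that selection, assuming 512-byte logical sectors and using blkid alone where the traced block_in_use consults spdk-gpt.py first; the traced loop also filters out nvme*c* controller entries with an extglob, omitted here:

    # Simplified stand-in: a device counts as "in use" if it carries any partition table
    block_in_use() {
        local pt
        pt=$(blkid -s PTTYPE -o value "/dev/$1")
        [[ -n $pt ]]
    }

    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472 bytes, as in the trace
    blocks=()
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # Zoned namespaces are excluded; usable ones report "none" here
        [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
        block_in_use "$dev" && continue
        size=$(( $(<"$block/size") * 512 ))      # sector count -> bytes
        (( size >= min_disk_size )) && blocks+=("$dev")
    done

In this run all four namespaces pass (5 GiB for nvme0n1, 4 GiB for each nvme1n*), and nvme0n1 is chosen as the test disk, as the trace shows next.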
00:05:22.939 14:45:55 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:22.939 14:45:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:22.939 14:45:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:22.939 14:45:55 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:22.939 14:45:55 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:22.939 14:45:55 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:22.939 14:45:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.939 14:45:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.939 14:45:55 -- common/autotest_common.sh@10 -- # set +x 00:05:22.939 ************************************ 00:05:22.939 START TEST nvme_mount 00:05:22.939 ************************************ 00:05:22.939 14:45:55 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:22.939 14:45:55 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:22.939 14:45:55 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:22.939 14:45:55 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.939 14:45:55 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.939 14:45:55 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:22.939 14:45:55 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:22.939 14:45:55 -- setup/common.sh@40 -- # local part_no=1 00:05:22.939 14:45:55 -- setup/common.sh@41 -- # local size=1073741824 00:05:22.939 14:45:55 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:22.939 14:45:55 -- setup/common.sh@44 -- # parts=() 00:05:22.939 14:45:55 -- setup/common.sh@44 -- # local parts 00:05:22.939 14:45:55 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:22.939 14:45:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.939 14:45:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:22.939 14:45:55 -- setup/common.sh@46 -- # (( part++ )) 00:05:22.939 14:45:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.939 14:45:55 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:22.939 14:45:55 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:22.939 14:45:55 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:24.316 Creating new GPT entries in memory. 00:05:24.316 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:24.316 other utilities. 00:05:24.316 14:45:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:24.316 14:45:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.316 14:45:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:24.316 14:45:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.316 14:45:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:25.254 Creating new GPT entries in memory. 00:05:25.254 The operation has completed successfully. 
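The nvme_mount test beginning in the lines above repartitions the test disk before mounting it. partition_drive (setup/common.sh) wipes any existing label with sgdisk --zap-all, then creates one 262144-sector partition per requested part, with part_start starting at 2048 and part_end = part_start + size - 1, hence --new=1:2048:264191 above and --new=2:264192:526335 later in the dm_mount run; the "GPT data structures destroyed!" and "The operation has completed successfully." lines are sgdisk's own output. A reduced sketch, assuming 512-byte sectors and omitting the sync_dev_uevents.sh helper the traced version runs alongside to wait for the new partition uevents:

    partition_drive() {
        local disk=$1 part_no=${2:-2}            # default of two partitions, as in the dm_mount call
        local size=$((1073741824 / 4096))        # 262144 sectors per partition
        local part part_start=0 part_end=0
        sgdisk "/dev/$disk" --zap-all            # drop any existing GPT/MBR
        for (( part = 1; part <= part_no; part++ )); do
            (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
            (( part_end = part_start + size - 1 ))
            flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$part_start:$part_end
        done
    }

Usage matching the trace: "partition_drive nvme0n1 1" for nvme_mount, and plain "partition_drive nvme0n1" (two partitions) for dm_mount. The resulting partition is then formatted with mkfs.ext4 -qF and mounted under test/setup/nvme_mount, as the log continues below.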
00:05:25.254 14:45:58 -- setup/common.sh@57 -- # (( part++ )) 00:05:25.254 14:45:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.254 14:45:58 -- setup/common.sh@62 -- # wait 65881 00:05:25.254 14:45:58 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.254 14:45:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:25.254 14:45:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.254 14:45:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:25.254 14:45:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:25.254 14:45:58 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.254 14:45:58 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.254 14:45:58 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:25.254 14:45:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:25.254 14:45:58 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.254 14:45:58 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.254 14:45:58 -- setup/devices.sh@53 -- # local found=0 00:05:25.254 14:45:58 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.254 14:45:58 -- setup/devices.sh@56 -- # : 00:05:25.254 14:45:58 -- setup/devices.sh@59 -- # local pci status 00:05:25.254 14:45:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:25.254 14:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.254 14:45:58 -- setup/devices.sh@47 -- # setup output config 00:05:25.254 14:45:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.254 14:45:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.254 14:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.254 14:45:58 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:25.254 14:45:58 -- setup/devices.sh@63 -- # found=1 00:05:25.254 14:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.254 14:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.254 14:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.823 14:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.823 14:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.823 14:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.823 14:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.823 14:45:58 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.823 14:45:58 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:25.823 14:45:58 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.823 14:45:58 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.823 14:45:58 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.823 14:45:58 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:25.823 14:45:58 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.823 14:45:58 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.823 14:45:58 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.823 14:45:58 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:25.823 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.823 14:45:58 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.823 14:45:58 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.082 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:26.082 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:26.082 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:26.082 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:26.082 14:45:59 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:26.082 14:45:59 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:26.082 14:45:59 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.082 14:45:59 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:26.082 14:45:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:26.082 14:45:59 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.082 14:45:59 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.082 14:45:59 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:26.082 14:45:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:26.082 14:45:59 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.082 14:45:59 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.082 14:45:59 -- setup/devices.sh@53 -- # local found=0 00:05:26.082 14:45:59 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.082 14:45:59 -- setup/devices.sh@56 -- # : 00:05:26.082 14:45:59 -- setup/devices.sh@59 -- # local pci status 00:05:26.082 14:45:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:26.082 14:45:59 -- setup/devices.sh@47 -- # setup output config 00:05:26.082 14:45:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.082 14:45:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:26.082 14:45:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.340 14:45:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.340 14:45:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:26.340 14:45:59 -- setup/devices.sh@63 -- # found=1 00:05:26.340 14:45:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.340 14:45:59 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.340 
14:45:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.909 14:45:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.909 14:45:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.909 14:45:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.909 14:45:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.909 14:45:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.909 14:45:59 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:26.909 14:45:59 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.909 14:45:59 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.909 14:45:59 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.909 14:45:59 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.909 14:45:59 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:26.909 14:45:59 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:26.909 14:45:59 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:26.909 14:45:59 -- setup/devices.sh@50 -- # local mount_point= 00:05:26.909 14:45:59 -- setup/devices.sh@51 -- # local test_file= 00:05:26.909 14:45:59 -- setup/devices.sh@53 -- # local found=0 00:05:26.909 14:45:59 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:26.909 14:45:59 -- setup/devices.sh@59 -- # local pci status 00:05:26.909 14:45:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.909 14:45:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:26.909 14:45:59 -- setup/devices.sh@47 -- # setup output config 00:05:26.909 14:45:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.909 14:45:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.168 14:46:00 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.168 14:46:00 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:27.168 14:46:00 -- setup/devices.sh@63 -- # found=1 00:05:27.168 14:46:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.168 14:46:00 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.168 14:46:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.426 14:46:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.426 14:46:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.685 14:46:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.685 14:46:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.685 14:46:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.685 14:46:00 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:27.685 14:46:00 -- setup/devices.sh@68 -- # return 0 00:05:27.685 14:46:00 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:27.685 14:46:00 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.685 14:46:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.685 14:46:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:27.685 14:46:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:27.685 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:27.685 00:05:27.685 real 0m4.737s 00:05:27.685 user 0m1.096s 00:05:27.685 sys 0m1.322s 00:05:27.685 14:46:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.685 14:46:00 -- common/autotest_common.sh@10 -- # set +x 00:05:27.685 ************************************ 00:05:27.685 END TEST nvme_mount 00:05:27.685 ************************************ 00:05:27.685 14:46:00 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:27.685 14:46:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.685 14:46:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.685 14:46:00 -- common/autotest_common.sh@10 -- # set +x 00:05:27.685 ************************************ 00:05:27.685 START TEST dm_mount 00:05:27.685 ************************************ 00:05:27.685 14:46:00 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:27.685 14:46:00 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:27.685 14:46:00 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:27.685 14:46:00 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:27.685 14:46:00 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:27.685 14:46:00 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:27.685 14:46:00 -- setup/common.sh@40 -- # local part_no=2 00:05:27.685 14:46:00 -- setup/common.sh@41 -- # local size=1073741824 00:05:27.685 14:46:00 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:27.685 14:46:00 -- setup/common.sh@44 -- # parts=() 00:05:27.685 14:46:00 -- setup/common.sh@44 -- # local parts 00:05:27.685 14:46:00 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:27.685 14:46:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.685 14:46:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.685 14:46:00 -- setup/common.sh@46 -- # (( part++ )) 00:05:27.685 14:46:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.685 14:46:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.685 14:46:00 -- setup/common.sh@46 -- # (( part++ )) 00:05:27.685 14:46:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.685 14:46:00 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:27.686 14:46:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:27.686 14:46:00 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:29.061 Creating new GPT entries in memory. 00:05:29.061 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:29.062 other utilities. 00:05:29.062 14:46:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:29.062 14:46:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.062 14:46:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:29.062 14:46:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:29.062 14:46:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:29.999 Creating new GPT entries in memory. 00:05:29.999 The operation has completed successfully. 00:05:29.999 14:46:02 -- setup/common.sh@57 -- # (( part++ )) 00:05:29.999 14:46:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.999 14:46:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:29.999 14:46:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:29.999 14:46:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:30.936 The operation has completed successfully. 00:05:30.936 14:46:03 -- setup/common.sh@57 -- # (( part++ )) 00:05:30.936 14:46:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.936 14:46:03 -- setup/common.sh@62 -- # wait 66346 00:05:30.936 14:46:03 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:30.936 14:46:03 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.936 14:46:03 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.936 14:46:03 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:30.936 14:46:03 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:30.936 14:46:03 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.936 14:46:03 -- setup/devices.sh@161 -- # break 00:05:30.936 14:46:03 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.936 14:46:03 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:30.936 14:46:03 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:30.936 14:46:03 -- setup/devices.sh@166 -- # dm=dm-0 00:05:30.936 14:46:03 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:30.936 14:46:03 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:30.936 14:46:03 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.936 14:46:03 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:30.936 14:46:03 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.936 14:46:03 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:30.936 14:46:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:30.936 14:46:03 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.936 14:46:03 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.936 14:46:03 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:30.936 14:46:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:30.936 14:46:03 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.936 14:46:03 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.936 14:46:03 -- setup/devices.sh@53 -- # local found=0 00:05:30.936 14:46:03 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:30.936 14:46:03 -- setup/devices.sh@56 -- # : 00:05:30.936 14:46:03 -- setup/devices.sh@59 -- # local pci status 00:05:30.936 14:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.936 14:46:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:30.936 14:46:03 -- setup/devices.sh@47 -- # setup output config 00:05:30.936 14:46:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.936 14:46:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.195 14:46:04 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.195 14:46:04 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:31.195 14:46:04 -- setup/devices.sh@63 -- # found=1 00:05:31.195 14:46:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.195 14:46:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.195 14:46:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.454 14:46:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.454 14:46:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.713 14:46:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.713 14:46:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.713 14:46:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.713 14:46:04 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:31.713 14:46:04 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.713 14:46:04 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:31.713 14:46:04 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:31.713 14:46:04 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.713 14:46:04 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:31.713 14:46:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:31.713 14:46:04 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:31.713 14:46:04 -- setup/devices.sh@50 -- # local mount_point= 00:05:31.713 14:46:04 -- setup/devices.sh@51 -- # local test_file= 00:05:31.713 14:46:04 -- setup/devices.sh@53 -- # local found=0 00:05:31.713 14:46:04 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:31.713 14:46:04 -- setup/devices.sh@59 -- # local pci status 00:05:31.713 14:46:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.713 14:46:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:31.713 14:46:04 -- setup/devices.sh@47 -- # setup output config 00:05:31.713 14:46:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.713 14:46:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.972 14:46:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.972 14:46:04 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:31.972 14:46:04 -- setup/devices.sh@63 -- # found=1 00:05:31.972 14:46:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.972 14:46:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.972 14:46:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.229 14:46:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.229 14:46:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.488 14:46:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.488 14:46:05 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.488 14:46:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.488 14:46:05 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:32.488 14:46:05 -- setup/devices.sh@68 -- # return 0 00:05:32.488 14:46:05 -- setup/devices.sh@187 -- # cleanup_dm 00:05:32.488 14:46:05 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.488 14:46:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:32.488 14:46:05 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:32.488 14:46:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.488 14:46:05 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:32.488 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:32.488 14:46:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:32.488 14:46:05 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:32.488 00:05:32.488 real 0m4.739s 00:05:32.488 user 0m0.737s 00:05:32.488 sys 0m0.898s 00:05:32.488 14:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.488 ************************************ 00:05:32.488 END TEST dm_mount 00:05:32.488 ************************************ 00:05:32.488 14:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:32.488 14:46:05 -- setup/devices.sh@1 -- # cleanup 00:05:32.488 14:46:05 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:32.488 14:46:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.488 14:46:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.489 14:46:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:32.489 14:46:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:32.489 14:46:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:32.748 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:32.748 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:32.748 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:32.748 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:32.748 14:46:05 -- setup/devices.sh@12 -- # cleanup_dm 00:05:32.748 14:46:05 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.748 14:46:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:32.748 14:46:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.748 14:46:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:32.748 14:46:05 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:32.748 14:46:05 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:32.748 ************************************ 00:05:32.748 END TEST devices 00:05:32.748 ************************************ 00:05:32.748 00:05:32.748 real 0m11.209s 00:05:32.748 user 0m2.582s 00:05:32.748 sys 0m2.916s 00:05:32.748 14:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.748 14:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:33.007 ************************************ 00:05:33.007 END TEST setup.sh 00:05:33.007 ************************************ 00:05:33.007 00:05:33.007 real 0m24.037s 00:05:33.007 user 0m8.257s 00:05:33.007 sys 0m10.179s 00:05:33.007 14:46:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.007 14:46:05 -- common/autotest_common.sh@10 -- # set +x 00:05:33.007 14:46:05 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:33.007 Hugepages 00:05:33.007 node hugesize free / total 00:05:33.007 node0 1048576kB 0 / 0 00:05:33.007 node0 2048kB 2048 / 2048 00:05:33.007 00:05:33.007 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:33.265 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:33.265 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:33.265 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:33.265 14:46:06 -- spdk/autotest.sh@128 -- # uname -s 00:05:33.265 14:46:06 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:33.265 14:46:06 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:33.265 14:46:06 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.236 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.236 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.236 14:46:07 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:35.614 14:46:08 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:35.614 14:46:08 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:35.614 14:46:08 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:35.614 14:46:08 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:35.614 14:46:08 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:35.614 14:46:08 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:35.614 14:46:08 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.614 14:46:08 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:35.614 14:46:08 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:35.614 14:46:08 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:35.614 14:46:08 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:35.614 14:46:08 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.874 Waiting for block devices as requested 00:05:35.874 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.874 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.133 14:46:09 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:36.133 14:46:09 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:36.133 14:46:09 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:36.133 14:46:09 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:36.133 14:46:09 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:36.133 14:46:09 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:36.133 14:46:09 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:36.133 14:46:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:36.133 14:46:09 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:36.133 14:46:09 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:36.133 14:46:09 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:36.133 14:46:09 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:36.133 14:46:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:36.133 14:46:09 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:36.133 14:46:09 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:36.133 14:46:09 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:36.133 14:46:09 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:36.133 14:46:09 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:36.133 14:46:09 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:36.133 14:46:09 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:36.133 14:46:09 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:36.133 14:46:09 -- common/autotest_common.sh@1552 -- # continue 00:05:36.133 14:46:09 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:36.133 14:46:09 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:36.133 14:46:09 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:36.133 14:46:09 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:36.133 14:46:09 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:36.133 14:46:09 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:36.133 14:46:09 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:36.133 14:46:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:36.133 14:46:09 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:36.133 14:46:09 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:36.133 14:46:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:36.133 14:46:09 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:36.133 14:46:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:36.133 14:46:09 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:36.133 14:46:09 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:36.133 14:46:09 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:36.133 14:46:09 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:36.133 14:46:09 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:36.133 14:46:09 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:36.133 14:46:09 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:36.133 14:46:09 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:36.133 14:46:09 -- common/autotest_common.sh@1552 -- # continue 00:05:36.133 14:46:09 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:36.133 14:46:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:36.133 14:46:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.133 14:46:09 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:36.133 14:46:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.133 14:46:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.133 14:46:09 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.072 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.072 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:37.072 14:46:10 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:37.072 14:46:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:37.072 14:46:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.072 14:46:10 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:37.072 14:46:10 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:37.072 14:46:10 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:37.072 14:46:10 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:37.072 14:46:10 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:37.072 14:46:10 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:37.072 14:46:10 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:37.072 14:46:10 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:37.072 14:46:10 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:37.072 14:46:10 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:37.072 14:46:10 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:37.072 14:46:10 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:37.072 14:46:10 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:37.072 14:46:10 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:37.072 14:46:10 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:37.072 14:46:10 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:37.072 14:46:10 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:37.072 14:46:10 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:37.072 14:46:10 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:37.072 14:46:10 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:37.072 14:46:10 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:37.072 14:46:10 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:37.072 14:46:10 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:37.072 14:46:10 -- common/autotest_common.sh@1588 -- # return 0 00:05:37.072 14:46:10 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:37.072 14:46:10 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:37.072 14:46:10 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:37.072 14:46:10 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:37.072 14:46:10 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:37.072 14:46:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:37.072 14:46:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.072 14:46:10 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:37.072 14:46:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.072 14:46:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.072 14:46:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.331 ************************************ 00:05:37.331 START TEST env 00:05:37.331 ************************************ 00:05:37.331 14:46:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:37.331 * Looking for test storage... 
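(Note on the namespace-revert pre-check traced above: the script reads the controller's Optional Admin Command Support field with nvme-cli, keeps bit 3 (Namespace Management), and then checks unvmcap before touching namespaces. A minimal standalone sketch of the same oacs check, assuming /dev/nvme0 exists; the variable name and messages are illustrative, not part of the test scripts:)

    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # e.g. ' 0x12a', as in the trace
    if (( (oacs & 0x8) != 0 )); then
        echo "namespace management supported"
    else
        echo "namespace management not supported; revert would be skipped"
    fi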
00:05:37.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:37.331 14:46:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:37.331 14:46:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.331 14:46:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:37.331 14:46:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.331 14:46:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:37.331 14:46:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:37.331 14:46:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:37.331 14:46:10 -- scripts/common.sh@335 -- # IFS=.-: 00:05:37.331 14:46:10 -- scripts/common.sh@335 -- # read -ra ver1 00:05:37.331 14:46:10 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.331 14:46:10 -- scripts/common.sh@336 -- # read -ra ver2 00:05:37.331 14:46:10 -- scripts/common.sh@337 -- # local 'op=<' 00:05:37.331 14:46:10 -- scripts/common.sh@339 -- # ver1_l=2 00:05:37.331 14:46:10 -- scripts/common.sh@340 -- # ver2_l=1 00:05:37.331 14:46:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:37.331 14:46:10 -- scripts/common.sh@343 -- # case "$op" in 00:05:37.331 14:46:10 -- scripts/common.sh@344 -- # : 1 00:05:37.331 14:46:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:37.331 14:46:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.331 14:46:10 -- scripts/common.sh@364 -- # decimal 1 00:05:37.331 14:46:10 -- scripts/common.sh@352 -- # local d=1 00:05:37.331 14:46:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.331 14:46:10 -- scripts/common.sh@354 -- # echo 1 00:05:37.331 14:46:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:37.331 14:46:10 -- scripts/common.sh@365 -- # decimal 2 00:05:37.331 14:46:10 -- scripts/common.sh@352 -- # local d=2 00:05:37.331 14:46:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.331 14:46:10 -- scripts/common.sh@354 -- # echo 2 00:05:37.331 14:46:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.331 14:46:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.331 14:46:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.331 14:46:10 -- scripts/common.sh@367 -- # return 0 00:05:37.331 14:46:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.331 14:46:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.331 --rc genhtml_branch_coverage=1 00:05:37.331 --rc genhtml_function_coverage=1 00:05:37.331 --rc genhtml_legend=1 00:05:37.331 --rc geninfo_all_blocks=1 00:05:37.331 --rc geninfo_unexecuted_blocks=1 00:05:37.331 00:05:37.331 ' 00:05:37.331 14:46:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.331 --rc genhtml_branch_coverage=1 00:05:37.331 --rc genhtml_function_coverage=1 00:05:37.331 --rc genhtml_legend=1 00:05:37.331 --rc geninfo_all_blocks=1 00:05:37.331 --rc geninfo_unexecuted_blocks=1 00:05:37.331 00:05:37.331 ' 00:05:37.331 14:46:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.331 --rc genhtml_branch_coverage=1 00:05:37.331 --rc genhtml_function_coverage=1 00:05:37.331 --rc genhtml_legend=1 00:05:37.331 --rc geninfo_all_blocks=1 00:05:37.331 --rc geninfo_unexecuted_blocks=1 00:05:37.331 00:05:37.331 ' 00:05:37.331 14:46:10 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.331 --rc genhtml_branch_coverage=1 00:05:37.331 --rc genhtml_function_coverage=1 00:05:37.331 --rc genhtml_legend=1 00:05:37.331 --rc geninfo_all_blocks=1 00:05:37.331 --rc geninfo_unexecuted_blocks=1 00:05:37.332 00:05:37.332 ' 00:05:37.332 14:46:10 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:37.332 14:46:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.332 14:46:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.332 14:46:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.332 ************************************ 00:05:37.332 START TEST env_memory 00:05:37.332 ************************************ 00:05:37.332 14:46:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:37.332 00:05:37.332 00:05:37.332 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.332 http://cunit.sourceforge.net/ 00:05:37.332 00:05:37.332 00:05:37.332 Suite: memory 00:05:37.591 Test: alloc and free memory map ...[2024-12-01 14:46:10.456237] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.591 passed 00:05:37.591 Test: mem map translation ...[2024-12-01 14:46:10.487349] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.591 [2024-12-01 14:46:10.487386] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.591 [2024-12-01 14:46:10.487441] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.591 [2024-12-01 14:46:10.487453] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.591 passed 00:05:37.591 Test: mem map registration ...[2024-12-01 14:46:10.551108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:37.591 [2024-12-01 14:46:10.551141] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:37.591 passed 00:05:37.591 Test: mem map adjacent registrations ...passed 00:05:37.591 00:05:37.591 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.591 suites 1 1 n/a 0 0 00:05:37.591 tests 4 4 4 0 0 00:05:37.591 asserts 152 152 152 0 n/a 00:05:37.591 00:05:37.591 Elapsed time = 0.213 seconds 00:05:37.591 00:05:37.591 real 0m0.231s 00:05:37.591 user 0m0.215s 00:05:37.591 sys 0m0.011s 00:05:37.591 14:46:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.591 14:46:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.591 ************************************ 00:05:37.591 END TEST env_memory 00:05:37.591 ************************************ 00:05:37.591 14:46:10 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:37.591 14:46:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.591 14:46:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.591 14:46:10 -- 
common/autotest_common.sh@10 -- # set +x 00:05:37.591 ************************************ 00:05:37.591 START TEST env_vtophys 00:05:37.591 ************************************ 00:05:37.591 14:46:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:37.852 EAL: lib.eal log level changed from notice to debug 00:05:37.852 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.852 EAL: Detected lcore 1 as core 0 on socket 0 00:05:37.852 EAL: Detected lcore 2 as core 0 on socket 0 00:05:37.852 EAL: Detected lcore 3 as core 0 on socket 0 00:05:37.852 EAL: Detected lcore 4 as core 0 on socket 0 00:05:37.852 EAL: Detected lcore 5 as core 0 on socket 0 00:05:37.852 EAL: Detected lcore 6 as core 0 on socket 0 00:05:37.852 EAL: Detected lcore 7 as core 0 on socket 0 00:05:37.852 EAL: Detected lcore 8 as core 0 on socket 0 00:05:37.852 EAL: Detected lcore 9 as core 0 on socket 0 00:05:37.852 EAL: Maximum logical cores by configuration: 128 00:05:37.852 EAL: Detected CPU lcores: 10 00:05:37.852 EAL: Detected NUMA nodes: 1 00:05:37.852 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:37.852 EAL: Detected shared linkage of DPDK 00:05:37.852 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:37.852 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:37.852 EAL: Registered [vdev] bus. 00:05:37.852 EAL: bus.vdev log level changed from disabled to notice 00:05:37.852 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:37.852 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:37.852 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:37.852 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:37.852 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:37.852 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:37.852 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:37.852 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:37.852 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.852 EAL: No shared files mode enabled, IPC is disabled 00:05:37.852 EAL: Selected IOVA mode 'PA' 00:05:37.852 EAL: Probing VFIO support... 00:05:37.852 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:37.852 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:37.852 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.852 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.852 EAL: Setting up physically contiguous memory... 
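(The EAL lines just above show the test skipping VFIO because /sys/module/vfio is not loaded and selecting IOVA mode 'PA' on the 2 MB hugepages reserved earlier by scripts/setup.sh. A rough pre-flight check for those two preconditions, using only standard sysfs paths; the echoed messages are illustrative:)

    [[ -d /sys/module/vfio ]] && echo "vfio loaded" || echo "vfio missing; EAL will skip VFIO support"
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # should match the 'setup.sh status' table above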
00:05:37.852 EAL: Setting maximum number of open files to 524288 00:05:37.852 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.852 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.852 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.852 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.852 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.852 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.852 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.852 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.852 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.852 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.852 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.852 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.852 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.852 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.852 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.852 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.852 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.852 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.852 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.852 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.852 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.852 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.852 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.852 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.852 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.852 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.852 EAL: Hugepages will be freed exactly as allocated. 00:05:37.852 EAL: No shared files mode enabled, IPC is disabled 00:05:37.852 EAL: No shared files mode enabled, IPC is disabled 00:05:37.852 EAL: TSC frequency is ~2200000 KHz 00:05:37.852 EAL: Main lcore 0 is ready (tid=7fb70b242a00;cpuset=[0]) 00:05:37.852 EAL: Trying to obtain current memory policy. 00:05:37.852 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.852 EAL: Restoring previous memory policy: 0 00:05:37.852 EAL: request: mp_malloc_sync 00:05:37.852 EAL: No shared files mode enabled, IPC is disabled 00:05:37.852 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.852 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:37.852 EAL: No shared files mode enabled, IPC is disabled 00:05:37.852 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.852 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.852 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:37.852 00:05:37.852 00:05:37.852 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.852 http://cunit.sourceforge.net/ 00:05:37.852 00:05:37.852 00:05:37.852 Suite: components_suite 00:05:37.852 Test: vtophys_malloc_test ...passed 00:05:37.852 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:37.852 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.852 EAL: Restoring previous memory policy: 4 00:05:37.852 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.852 EAL: request: mp_malloc_sync 00:05:37.852 EAL: No shared files mode enabled, IPC is disabled 00:05:37.852 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.852 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.852 EAL: request: mp_malloc_sync 00:05:37.852 EAL: No shared files mode enabled, IPC is disabled 00:05:37.852 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.852 EAL: Trying to obtain current memory policy. 00:05:37.852 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.852 EAL: Restoring previous memory policy: 4 00:05:37.852 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.853 EAL: Trying to obtain current memory policy. 00:05:37.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.853 EAL: Restoring previous memory policy: 4 00:05:37.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.853 EAL: Trying to obtain current memory policy. 00:05:37.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.853 EAL: Restoring previous memory policy: 4 00:05:37.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.853 EAL: Trying to obtain current memory policy. 00:05:37.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.853 EAL: Restoring previous memory policy: 4 00:05:37.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.853 EAL: Trying to obtain current memory policy. 
00:05:37.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.853 EAL: Restoring previous memory policy: 4 00:05:37.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.853 EAL: Trying to obtain current memory policy. 00:05:37.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.853 EAL: Restoring previous memory policy: 4 00:05:37.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.853 EAL: request: mp_malloc_sync 00:05:37.853 EAL: No shared files mode enabled, IPC is disabled 00:05:37.853 EAL: Heap on socket 0 was expanded by 130MB 00:05:38.112 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.112 EAL: request: mp_malloc_sync 00:05:38.112 EAL: No shared files mode enabled, IPC is disabled 00:05:38.112 EAL: Heap on socket 0 was shrunk by 130MB 00:05:38.112 EAL: Trying to obtain current memory policy. 00:05:38.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.112 EAL: Restoring previous memory policy: 4 00:05:38.112 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.112 EAL: request: mp_malloc_sync 00:05:38.112 EAL: No shared files mode enabled, IPC is disabled 00:05:38.112 EAL: Heap on socket 0 was expanded by 258MB 00:05:38.112 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.112 EAL: request: mp_malloc_sync 00:05:38.112 EAL: No shared files mode enabled, IPC is disabled 00:05:38.112 EAL: Heap on socket 0 was shrunk by 258MB 00:05:38.112 EAL: Trying to obtain current memory policy. 00:05:38.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.371 EAL: Restoring previous memory policy: 4 00:05:38.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.372 EAL: request: mp_malloc_sync 00:05:38.372 EAL: No shared files mode enabled, IPC is disabled 00:05:38.372 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.631 EAL: request: mp_malloc_sync 00:05:38.631 EAL: No shared files mode enabled, IPC is disabled 00:05:38.631 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.631 EAL: Trying to obtain current memory policy. 
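(The malloc tests above pair each 'expanded by' event with a 'shrunk by' event of the same size as progressively larger buffers are allocated and freed. A quick consistency check over a saved copy of this output, assuming it was written to vtophys.log, an illustrative file name:)

    grep -c 'Heap on socket 0 was expanded by' vtophys.log
    grep -c 'Heap on socket 0 was shrunk by'   vtophys.log
    # the two counts match when every allocation made by the test was released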
00:05:38.631 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.631 EAL: Restoring previous memory policy: 4 00:05:38.631 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.631 EAL: request: mp_malloc_sync 00:05:38.631 EAL: No shared files mode enabled, IPC is disabled 00:05:38.631 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.150 passed 00:05:39.150 00:05:39.150 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.151 suites 1 1 n/a 0 0 00:05:39.151 tests 2 2 2 0 0 00:05:39.151 asserts 5274 5274 5274 0 n/a 00:05:39.151 00:05:39.151 Elapsed time = 1.227 seconds 00:05:39.151 EAL: request: mp_malloc_sync 00:05:39.151 EAL: No shared files mode enabled, IPC is disabled 00:05:39.151 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:39.151 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.151 EAL: request: mp_malloc_sync 00:05:39.151 EAL: No shared files mode enabled, IPC is disabled 00:05:39.151 EAL: Heap on socket 0 was shrunk by 2MB 00:05:39.151 EAL: No shared files mode enabled, IPC is disabled 00:05:39.151 EAL: No shared files mode enabled, IPC is disabled 00:05:39.151 EAL: No shared files mode enabled, IPC is disabled 00:05:39.151 00:05:39.151 real 0m1.419s 00:05:39.151 user 0m0.777s 00:05:39.151 sys 0m0.513s 00:05:39.151 14:46:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.151 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.151 ************************************ 00:05:39.151 END TEST env_vtophys 00:05:39.151 ************************************ 00:05:39.151 14:46:12 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.151 14:46:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.151 14:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.151 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.151 ************************************ 00:05:39.151 START TEST env_pci 00:05:39.151 ************************************ 00:05:39.151 14:46:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.151 00:05:39.151 00:05:39.151 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.151 http://cunit.sourceforge.net/ 00:05:39.151 00:05:39.151 00:05:39.151 Suite: pci 00:05:39.151 Test: pci_hook ...[2024-12-01 14:46:12.183315] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67491 has claimed it 00:05:39.151 passed 00:05:39.151 00:05:39.151 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.151 suites 1 1 n/a 0 0 00:05:39.151 tests 1 1 1 0 0 00:05:39.151 asserts 25 25 25 0 n/a 00:05:39.151 00:05:39.151 Elapsed time = 0.002 seconds 00:05:39.151 EAL: Cannot find device (10000:00:01.0) 00:05:39.151 EAL: Failed to attach device on primary process 00:05:39.151 00:05:39.151 real 0m0.021s 00:05:39.151 user 0m0.008s 00:05:39.151 sys 0m0.012s 00:05:39.151 14:46:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.151 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.151 ************************************ 00:05:39.151 END TEST env_pci 00:05:39.151 ************************************ 00:05:39.151 14:46:12 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.151 14:46:12 -- env/env.sh@15 -- # uname 00:05:39.151 14:46:12 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:39.151 14:46:12 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:39.151 14:46:12 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.151 14:46:12 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:39.151 14:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.151 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.151 ************************************ 00:05:39.151 START TEST env_dpdk_post_init 00:05:39.151 ************************************ 00:05:39.151 14:46:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.411 EAL: Detected CPU lcores: 10 00:05:39.411 EAL: Detected NUMA nodes: 1 00:05:39.411 EAL: Detected shared linkage of DPDK 00:05:39.411 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.411 EAL: Selected IOVA mode 'PA' 00:05:39.411 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.411 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:39.411 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:39.411 Starting DPDK initialization... 00:05:39.411 Starting SPDK post initialization... 00:05:39.411 SPDK NVMe probe 00:05:39.411 Attaching to 0000:00:06.0 00:05:39.411 Attaching to 0000:00:07.0 00:05:39.411 Attached to 0000:00:06.0 00:05:39.411 Attached to 0000:00:07.0 00:05:39.411 Cleaning up... 00:05:39.411 00:05:39.411 real 0m0.167s 00:05:39.411 user 0m0.042s 00:05:39.411 sys 0m0.025s 00:05:39.411 14:46:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.411 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.411 ************************************ 00:05:39.411 END TEST env_dpdk_post_init 00:05:39.411 ************************************ 00:05:39.411 14:46:12 -- env/env.sh@26 -- # uname 00:05:39.411 14:46:12 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:39.411 14:46:12 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.411 14:46:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.411 14:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.411 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.411 ************************************ 00:05:39.411 START TEST env_mem_callbacks 00:05:39.411 ************************************ 00:05:39.411 14:46:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.411 EAL: Detected CPU lcores: 10 00:05:39.411 EAL: Detected NUMA nodes: 1 00:05:39.411 EAL: Detected shared linkage of DPDK 00:05:39.411 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.411 EAL: Selected IOVA mode 'PA' 00:05:39.670 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.670 00:05:39.670 00:05:39.670 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.670 http://cunit.sourceforge.net/ 00:05:39.670 00:05:39.670 00:05:39.670 Suite: memory 00:05:39.670 Test: test ... 
00:05:39.670 register 0x200000200000 2097152 00:05:39.670 malloc 3145728 00:05:39.670 register 0x200000400000 4194304 00:05:39.670 buf 0x200000500000 len 3145728 PASSED 00:05:39.670 malloc 64 00:05:39.670 buf 0x2000004fff40 len 64 PASSED 00:05:39.670 malloc 4194304 00:05:39.670 register 0x200000800000 6291456 00:05:39.670 buf 0x200000a00000 len 4194304 PASSED 00:05:39.670 free 0x200000500000 3145728 00:05:39.670 free 0x2000004fff40 64 00:05:39.670 unregister 0x200000400000 4194304 PASSED 00:05:39.670 free 0x200000a00000 4194304 00:05:39.670 unregister 0x200000800000 6291456 PASSED 00:05:39.670 malloc 8388608 00:05:39.670 register 0x200000400000 10485760 00:05:39.670 buf 0x200000600000 len 8388608 PASSED 00:05:39.670 free 0x200000600000 8388608 00:05:39.670 unregister 0x200000400000 10485760 PASSED 00:05:39.670 passed 00:05:39.670 00:05:39.670 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.670 suites 1 1 n/a 0 0 00:05:39.670 tests 1 1 1 0 0 00:05:39.670 asserts 15 15 15 0 n/a 00:05:39.670 00:05:39.670 Elapsed time = 0.009 seconds 00:05:39.670 00:05:39.670 real 0m0.147s 00:05:39.670 user 0m0.022s 00:05:39.670 sys 0m0.023s 00:05:39.670 14:46:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.670 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.671 ************************************ 00:05:39.671 END TEST env_mem_callbacks 00:05:39.671 ************************************ 00:05:39.671 00:05:39.671 real 0m2.482s 00:05:39.671 user 0m1.262s 00:05:39.671 sys 0m0.850s 00:05:39.671 14:46:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.671 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.671 ************************************ 00:05:39.671 END TEST env 00:05:39.671 ************************************ 00:05:39.671 14:46:12 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.671 14:46:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.671 14:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.671 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.671 ************************************ 00:05:39.671 START TEST rpc 00:05:39.671 ************************************ 00:05:39.671 14:46:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.930 * Looking for test storage... 
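(The env suite above is a wrapper around standalone CUnit binaries under test/env/; with a built SPDK tree they can be re-run individually outside autotest. A minimal sketch, assuming the repository path shown in this log and root access for hugepages:)

    cd /home/vagrant/spdk_repo/spdk
    sudo ./scripts/setup.sh                  # reserve hugepages, rebind NVMe devices
    sudo ./test/env/memory/memory_ut         # memory map unit tests only
    sudo ./test/env/vtophys/vtophys          # virtual-to-physical translation tests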
00:05:39.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.930 14:46:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:39.930 14:46:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:39.930 14:46:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:39.930 14:46:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:39.930 14:46:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:39.930 14:46:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:39.930 14:46:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:39.930 14:46:12 -- scripts/common.sh@335 -- # IFS=.-: 00:05:39.930 14:46:12 -- scripts/common.sh@335 -- # read -ra ver1 00:05:39.930 14:46:12 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.930 14:46:12 -- scripts/common.sh@336 -- # read -ra ver2 00:05:39.930 14:46:12 -- scripts/common.sh@337 -- # local 'op=<' 00:05:39.930 14:46:12 -- scripts/common.sh@339 -- # ver1_l=2 00:05:39.930 14:46:12 -- scripts/common.sh@340 -- # ver2_l=1 00:05:39.930 14:46:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:39.930 14:46:12 -- scripts/common.sh@343 -- # case "$op" in 00:05:39.930 14:46:12 -- scripts/common.sh@344 -- # : 1 00:05:39.930 14:46:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:39.930 14:46:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.930 14:46:12 -- scripts/common.sh@364 -- # decimal 1 00:05:39.930 14:46:12 -- scripts/common.sh@352 -- # local d=1 00:05:39.930 14:46:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.930 14:46:12 -- scripts/common.sh@354 -- # echo 1 00:05:39.930 14:46:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:39.930 14:46:12 -- scripts/common.sh@365 -- # decimal 2 00:05:39.930 14:46:12 -- scripts/common.sh@352 -- # local d=2 00:05:39.930 14:46:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.930 14:46:12 -- scripts/common.sh@354 -- # echo 2 00:05:39.930 14:46:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:39.930 14:46:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:39.930 14:46:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:39.930 14:46:12 -- scripts/common.sh@367 -- # return 0 00:05:39.930 14:46:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.930 14:46:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:39.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.930 --rc genhtml_branch_coverage=1 00:05:39.930 --rc genhtml_function_coverage=1 00:05:39.930 --rc genhtml_legend=1 00:05:39.930 --rc geninfo_all_blocks=1 00:05:39.930 --rc geninfo_unexecuted_blocks=1 00:05:39.930 00:05:39.930 ' 00:05:39.930 14:46:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:39.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.930 --rc genhtml_branch_coverage=1 00:05:39.930 --rc genhtml_function_coverage=1 00:05:39.930 --rc genhtml_legend=1 00:05:39.930 --rc geninfo_all_blocks=1 00:05:39.930 --rc geninfo_unexecuted_blocks=1 00:05:39.930 00:05:39.930 ' 00:05:39.930 14:46:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:39.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.930 --rc genhtml_branch_coverage=1 00:05:39.930 --rc genhtml_function_coverage=1 00:05:39.930 --rc genhtml_legend=1 00:05:39.930 --rc geninfo_all_blocks=1 00:05:39.930 --rc geninfo_unexecuted_blocks=1 00:05:39.930 00:05:39.930 ' 00:05:39.930 14:46:12 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:39.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.930 --rc genhtml_branch_coverage=1 00:05:39.930 --rc genhtml_function_coverage=1 00:05:39.930 --rc genhtml_legend=1 00:05:39.930 --rc geninfo_all_blocks=1 00:05:39.930 --rc geninfo_unexecuted_blocks=1 00:05:39.930 00:05:39.930 ' 00:05:39.930 14:46:12 -- rpc/rpc.sh@65 -- # spdk_pid=67608 00:05:39.930 14:46:12 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:39.930 14:46:12 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.930 14:46:12 -- rpc/rpc.sh@67 -- # waitforlisten 67608 00:05:39.930 14:46:12 -- common/autotest_common.sh@829 -- # '[' -z 67608 ']' 00:05:39.930 14:46:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.930 14:46:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.930 14:46:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.930 14:46:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.930 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.930 [2024-12-01 14:46:12.985820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.930 [2024-12-01 14:46:12.985922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67608 ] 00:05:40.189 [2024-12-01 14:46:13.122787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.189 [2024-12-01 14:46:13.177216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.189 [2024-12-01 14:46:13.177341] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:40.189 [2024-12-01 14:46:13.177353] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67608' to capture a snapshot of events at runtime. 00:05:40.189 [2024-12-01 14:46:13.177361] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67608 for offline analysis/debug. 
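(rpc.sh above starts spdk_tgt with the bdev tracepoint group and waits for it to listen on the RPC socket before issuing rpc_cmd calls. Roughly the same flow by hand, assuming the default /var/tmp/spdk.sock socket; rpc_get_methods is used here only as a readiness probe:)

    ./build/bin/spdk_tgt -e bdev &
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    ./scripts/rpc.py bdev_malloc_create 8 512    # same call the rpc_integrity test issues next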
00:05:40.189 [2024-12-01 14:46:13.177389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.126 14:46:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.126 14:46:13 -- common/autotest_common.sh@862 -- # return 0 00:05:41.126 14:46:13 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.126 14:46:13 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.126 14:46:13 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.126 14:46:13 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.126 14:46:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.126 14:46:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.126 14:46:13 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 ************************************ 00:05:41.126 START TEST rpc_integrity 00:05:41.126 ************************************ 00:05:41.126 14:46:14 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:41.126 14:46:14 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.126 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.126 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.126 14:46:14 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.126 14:46:14 -- rpc/rpc.sh@13 -- # jq length 00:05:41.126 14:46:14 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.126 14:46:14 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.126 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.126 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.126 14:46:14 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.126 14:46:14 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.126 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.126 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.126 14:46:14 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.126 { 00:05:41.126 "aliases": [ 00:05:41.126 "bd22421e-a3a6-4227-b0f8-bfc44c21d0c7" 00:05:41.126 ], 00:05:41.126 "assigned_rate_limits": { 00:05:41.126 "r_mbytes_per_sec": 0, 00:05:41.126 "rw_ios_per_sec": 0, 00:05:41.126 "rw_mbytes_per_sec": 0, 00:05:41.126 "w_mbytes_per_sec": 0 00:05:41.126 }, 00:05:41.126 "block_size": 512, 00:05:41.126 "claimed": false, 00:05:41.126 "driver_specific": {}, 00:05:41.126 "memory_domains": [ 00:05:41.126 { 00:05:41.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.126 "dma_device_type": 2 00:05:41.126 } 00:05:41.126 ], 00:05:41.126 "name": "Malloc0", 00:05:41.126 "num_blocks": 16384, 00:05:41.126 "product_name": "Malloc disk", 00:05:41.126 "supported_io_types": { 00:05:41.126 "abort": true, 00:05:41.126 "compare": false, 00:05:41.126 "compare_and_write": false, 00:05:41.126 "flush": true, 00:05:41.126 "nvme_admin": false, 00:05:41.126 "nvme_io": false, 00:05:41.126 "read": true, 00:05:41.126 "reset": true, 00:05:41.126 "unmap": true, 00:05:41.126 "write": true, 00:05:41.126 "write_zeroes": true 00:05:41.126 }, 
00:05:41.126 "uuid": "bd22421e-a3a6-4227-b0f8-bfc44c21d0c7", 00:05:41.126 "zoned": false 00:05:41.126 } 00:05:41.126 ]' 00:05:41.126 14:46:14 -- rpc/rpc.sh@17 -- # jq length 00:05:41.126 14:46:14 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.126 14:46:14 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.126 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.126 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 [2024-12-01 14:46:14.159778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.126 [2024-12-01 14:46:14.159814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.126 [2024-12-01 14:46:14.159832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1375b60 00:05:41.126 [2024-12-01 14:46:14.159840] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.126 [2024-12-01 14:46:14.160936] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.126 [2024-12-01 14:46:14.160968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.126 Passthru0 00:05:41.126 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.126 14:46:14 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.126 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.126 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.126 14:46:14 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.126 { 00:05:41.126 "aliases": [ 00:05:41.126 "bd22421e-a3a6-4227-b0f8-bfc44c21d0c7" 00:05:41.126 ], 00:05:41.126 "assigned_rate_limits": { 00:05:41.126 "r_mbytes_per_sec": 0, 00:05:41.126 "rw_ios_per_sec": 0, 00:05:41.126 "rw_mbytes_per_sec": 0, 00:05:41.126 "w_mbytes_per_sec": 0 00:05:41.126 }, 00:05:41.126 "block_size": 512, 00:05:41.126 "claim_type": "exclusive_write", 00:05:41.126 "claimed": true, 00:05:41.126 "driver_specific": {}, 00:05:41.126 "memory_domains": [ 00:05:41.126 { 00:05:41.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.126 "dma_device_type": 2 00:05:41.126 } 00:05:41.126 ], 00:05:41.126 "name": "Malloc0", 00:05:41.126 "num_blocks": 16384, 00:05:41.126 "product_name": "Malloc disk", 00:05:41.126 "supported_io_types": { 00:05:41.126 "abort": true, 00:05:41.126 "compare": false, 00:05:41.127 "compare_and_write": false, 00:05:41.127 "flush": true, 00:05:41.127 "nvme_admin": false, 00:05:41.127 "nvme_io": false, 00:05:41.127 "read": true, 00:05:41.127 "reset": true, 00:05:41.127 "unmap": true, 00:05:41.127 "write": true, 00:05:41.127 "write_zeroes": true 00:05:41.127 }, 00:05:41.127 "uuid": "bd22421e-a3a6-4227-b0f8-bfc44c21d0c7", 00:05:41.127 "zoned": false 00:05:41.127 }, 00:05:41.127 { 00:05:41.127 "aliases": [ 00:05:41.127 "09a09354-75a8-5cb5-a914-5ea35c44c978" 00:05:41.127 ], 00:05:41.127 "assigned_rate_limits": { 00:05:41.127 "r_mbytes_per_sec": 0, 00:05:41.127 "rw_ios_per_sec": 0, 00:05:41.127 "rw_mbytes_per_sec": 0, 00:05:41.127 "w_mbytes_per_sec": 0 00:05:41.127 }, 00:05:41.127 "block_size": 512, 00:05:41.127 "claimed": false, 00:05:41.127 "driver_specific": { 00:05:41.127 "passthru": { 00:05:41.127 "base_bdev_name": "Malloc0", 00:05:41.127 "name": "Passthru0" 00:05:41.127 } 00:05:41.127 }, 00:05:41.127 "memory_domains": [ 00:05:41.127 { 00:05:41.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.127 "dma_device_type": 2 00:05:41.127 } 00:05:41.127 ], 
00:05:41.127 "name": "Passthru0", 00:05:41.127 "num_blocks": 16384, 00:05:41.127 "product_name": "passthru", 00:05:41.127 "supported_io_types": { 00:05:41.127 "abort": true, 00:05:41.127 "compare": false, 00:05:41.127 "compare_and_write": false, 00:05:41.127 "flush": true, 00:05:41.127 "nvme_admin": false, 00:05:41.127 "nvme_io": false, 00:05:41.127 "read": true, 00:05:41.127 "reset": true, 00:05:41.127 "unmap": true, 00:05:41.127 "write": true, 00:05:41.127 "write_zeroes": true 00:05:41.127 }, 00:05:41.127 "uuid": "09a09354-75a8-5cb5-a914-5ea35c44c978", 00:05:41.127 "zoned": false 00:05:41.127 } 00:05:41.127 ]' 00:05:41.127 14:46:14 -- rpc/rpc.sh@21 -- # jq length 00:05:41.127 14:46:14 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.127 14:46:14 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.127 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.127 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.386 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.386 14:46:14 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.386 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.386 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.386 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.386 14:46:14 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.386 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.386 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.386 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.386 14:46:14 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.386 14:46:14 -- rpc/rpc.sh@26 -- # jq length 00:05:41.386 14:46:14 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.386 00:05:41.386 real 0m0.311s 00:05:41.386 user 0m0.195s 00:05:41.386 sys 0m0.038s 00:05:41.386 14:46:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.386 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.386 ************************************ 00:05:41.386 END TEST rpc_integrity 00:05:41.386 ************************************ 00:05:41.386 14:46:14 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.386 14:46:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.386 14:46:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.386 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.386 ************************************ 00:05:41.386 START TEST rpc_plugins 00:05:41.386 ************************************ 00:05:41.386 14:46:14 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:41.386 14:46:14 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.386 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.386 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.386 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.386 14:46:14 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.386 14:46:14 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.386 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.386 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.386 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.386 14:46:14 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.386 { 00:05:41.386 "aliases": [ 00:05:41.386 "7bed1183-95af-41f7-bb3d-2f3e531afd28" 00:05:41.386 ], 00:05:41.386 "assigned_rate_limits": { 00:05:41.386 "r_mbytes_per_sec": 0, 00:05:41.386 
"rw_ios_per_sec": 0, 00:05:41.386 "rw_mbytes_per_sec": 0, 00:05:41.386 "w_mbytes_per_sec": 0 00:05:41.386 }, 00:05:41.386 "block_size": 4096, 00:05:41.386 "claimed": false, 00:05:41.386 "driver_specific": {}, 00:05:41.386 "memory_domains": [ 00:05:41.386 { 00:05:41.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.386 "dma_device_type": 2 00:05:41.386 } 00:05:41.386 ], 00:05:41.386 "name": "Malloc1", 00:05:41.386 "num_blocks": 256, 00:05:41.386 "product_name": "Malloc disk", 00:05:41.386 "supported_io_types": { 00:05:41.386 "abort": true, 00:05:41.386 "compare": false, 00:05:41.386 "compare_and_write": false, 00:05:41.386 "flush": true, 00:05:41.386 "nvme_admin": false, 00:05:41.386 "nvme_io": false, 00:05:41.386 "read": true, 00:05:41.386 "reset": true, 00:05:41.386 "unmap": true, 00:05:41.386 "write": true, 00:05:41.386 "write_zeroes": true 00:05:41.386 }, 00:05:41.386 "uuid": "7bed1183-95af-41f7-bb3d-2f3e531afd28", 00:05:41.386 "zoned": false 00:05:41.386 } 00:05:41.386 ]' 00:05:41.386 14:46:14 -- rpc/rpc.sh@32 -- # jq length 00:05:41.386 14:46:14 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.386 14:46:14 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.386 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.386 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.386 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.386 14:46:14 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.386 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.386 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.386 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.386 14:46:14 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.386 14:46:14 -- rpc/rpc.sh@36 -- # jq length 00:05:41.645 14:46:14 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.645 00:05:41.645 real 0m0.161s 00:05:41.645 user 0m0.108s 00:05:41.645 sys 0m0.017s 00:05:41.645 14:46:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.645 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.645 ************************************ 00:05:41.645 END TEST rpc_plugins 00:05:41.645 ************************************ 00:05:41.645 14:46:14 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:41.645 14:46:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.645 14:46:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.645 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.645 ************************************ 00:05:41.645 START TEST rpc_trace_cmd_test 00:05:41.645 ************************************ 00:05:41.645 14:46:14 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:41.645 14:46:14 -- rpc/rpc.sh@40 -- # local info 00:05:41.645 14:46:14 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.645 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.645 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.645 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.645 14:46:14 -- rpc/rpc.sh@42 -- # info='{ 00:05:41.645 "bdev": { 00:05:41.645 "mask": "0x8", 00:05:41.645 "tpoint_mask": "0xffffffffffffffff" 00:05:41.646 }, 00:05:41.646 "bdev_nvme": { 00:05:41.646 "mask": "0x4000", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "blobfs": { 00:05:41.646 "mask": "0x80", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "dsa": { 00:05:41.646 "mask": "0x200", 00:05:41.646 
"tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "ftl": { 00:05:41.646 "mask": "0x40", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "iaa": { 00:05:41.646 "mask": "0x1000", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "iscsi_conn": { 00:05:41.646 "mask": "0x2", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "nvme_pcie": { 00:05:41.646 "mask": "0x800", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "nvme_tcp": { 00:05:41.646 "mask": "0x2000", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "nvmf_rdma": { 00:05:41.646 "mask": "0x10", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "nvmf_tcp": { 00:05:41.646 "mask": "0x20", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "scsi": { 00:05:41.646 "mask": "0x4", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "thread": { 00:05:41.646 "mask": "0x400", 00:05:41.646 "tpoint_mask": "0x0" 00:05:41.646 }, 00:05:41.646 "tpoint_group_mask": "0x8", 00:05:41.646 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67608" 00:05:41.646 }' 00:05:41.646 14:46:14 -- rpc/rpc.sh@43 -- # jq length 00:05:41.646 14:46:14 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:41.646 14:46:14 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:41.646 14:46:14 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:41.646 14:46:14 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:41.646 14:46:14 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:41.646 14:46:14 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:41.905 14:46:14 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:41.905 14:46:14 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:41.905 14:46:14 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:41.905 00:05:41.905 real 0m0.285s 00:05:41.905 user 0m0.248s 00:05:41.905 sys 0m0.025s 00:05:41.905 14:46:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.905 ************************************ 00:05:41.905 END TEST rpc_trace_cmd_test 00:05:41.905 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.905 ************************************ 00:05:41.905 14:46:14 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:41.905 14:46:14 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:41.905 14:46:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.905 14:46:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.905 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.905 ************************************ 00:05:41.905 START TEST go_rpc 00:05:41.905 ************************************ 00:05:41.905 14:46:14 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:41.905 14:46:14 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:41.905 14:46:14 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:41.905 14:46:14 -- rpc/rpc.sh@52 -- # jq length 00:05:41.905 14:46:14 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:41.905 14:46:14 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.905 14:46:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.905 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:41.905 14:46:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.905 14:46:14 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:41.905 14:46:14 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:41.905 14:46:15 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["c28f2c42-bd69-4cfd-be42-dbc17157940e"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"c28f2c42-bd69-4cfd-be42-dbc17157940e","zoned":false}]' 00:05:41.905 14:46:15 -- rpc/rpc.sh@57 -- # jq length 00:05:42.165 14:46:15 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:42.165 14:46:15 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.165 14:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.165 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.165 14:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.165 14:46:15 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:42.165 14:46:15 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:42.165 14:46:15 -- rpc/rpc.sh@61 -- # jq length 00:05:42.165 14:46:15 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:42.165 00:05:42.165 real 0m0.229s 00:05:42.165 user 0m0.151s 00:05:42.165 sys 0m0.036s 00:05:42.165 14:46:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.165 ************************************ 00:05:42.165 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.165 END TEST go_rpc 00:05:42.165 ************************************ 00:05:42.165 14:46:15 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.165 14:46:15 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.165 14:46:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.165 14:46:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.165 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.165 ************************************ 00:05:42.165 START TEST rpc_daemon_integrity 00:05:42.165 ************************************ 00:05:42.165 14:46:15 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:42.165 14:46:15 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.165 14:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.165 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.165 14:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.165 14:46:15 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.165 14:46:15 -- rpc/rpc.sh@13 -- # jq length 00:05:42.165 14:46:15 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.165 14:46:15 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.165 14:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.165 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.165 14:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.165 14:46:15 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:42.165 14:46:15 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.165 14:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.165 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.424 14:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.424 14:46:15 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.424 { 00:05:42.424 "aliases": [ 00:05:42.424 "a9548a3b-ce01-4771-b0f3-7a4dc7056a4a" 00:05:42.424 ], 00:05:42.424 "assigned_rate_limits": { 00:05:42.424 
"r_mbytes_per_sec": 0, 00:05:42.424 "rw_ios_per_sec": 0, 00:05:42.424 "rw_mbytes_per_sec": 0, 00:05:42.424 "w_mbytes_per_sec": 0 00:05:42.424 }, 00:05:42.424 "block_size": 512, 00:05:42.424 "claimed": false, 00:05:42.424 "driver_specific": {}, 00:05:42.424 "memory_domains": [ 00:05:42.424 { 00:05:42.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.424 "dma_device_type": 2 00:05:42.424 } 00:05:42.424 ], 00:05:42.424 "name": "Malloc3", 00:05:42.424 "num_blocks": 16384, 00:05:42.424 "product_name": "Malloc disk", 00:05:42.424 "supported_io_types": { 00:05:42.424 "abort": true, 00:05:42.424 "compare": false, 00:05:42.424 "compare_and_write": false, 00:05:42.424 "flush": true, 00:05:42.424 "nvme_admin": false, 00:05:42.424 "nvme_io": false, 00:05:42.424 "read": true, 00:05:42.424 "reset": true, 00:05:42.424 "unmap": true, 00:05:42.424 "write": true, 00:05:42.424 "write_zeroes": true 00:05:42.424 }, 00:05:42.424 "uuid": "a9548a3b-ce01-4771-b0f3-7a4dc7056a4a", 00:05:42.424 "zoned": false 00:05:42.424 } 00:05:42.424 ]' 00:05:42.424 14:46:15 -- rpc/rpc.sh@17 -- # jq length 00:05:42.424 14:46:15 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.424 14:46:15 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:42.424 14:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.424 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.424 [2024-12-01 14:46:15.336099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:42.424 [2024-12-01 14:46:15.336140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.424 [2024-12-01 14:46:15.336153] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1377990 00:05:42.424 [2024-12-01 14:46:15.336160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.424 [2024-12-01 14:46:15.337164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.424 [2024-12-01 14:46:15.337186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.424 Passthru0 00:05:42.424 14:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.424 14:46:15 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.424 14:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.424 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.424 14:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.424 14:46:15 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.424 { 00:05:42.424 "aliases": [ 00:05:42.424 "a9548a3b-ce01-4771-b0f3-7a4dc7056a4a" 00:05:42.424 ], 00:05:42.424 "assigned_rate_limits": { 00:05:42.424 "r_mbytes_per_sec": 0, 00:05:42.424 "rw_ios_per_sec": 0, 00:05:42.424 "rw_mbytes_per_sec": 0, 00:05:42.424 "w_mbytes_per_sec": 0 00:05:42.424 }, 00:05:42.424 "block_size": 512, 00:05:42.424 "claim_type": "exclusive_write", 00:05:42.424 "claimed": true, 00:05:42.424 "driver_specific": {}, 00:05:42.424 "memory_domains": [ 00:05:42.424 { 00:05:42.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.424 "dma_device_type": 2 00:05:42.424 } 00:05:42.424 ], 00:05:42.424 "name": "Malloc3", 00:05:42.424 "num_blocks": 16384, 00:05:42.424 "product_name": "Malloc disk", 00:05:42.424 "supported_io_types": { 00:05:42.424 "abort": true, 00:05:42.424 "compare": false, 00:05:42.424 "compare_and_write": false, 00:05:42.424 "flush": true, 00:05:42.424 "nvme_admin": false, 00:05:42.424 "nvme_io": false, 00:05:42.424 "read": true, 00:05:42.424 "reset": true, 
00:05:42.424 "unmap": true, 00:05:42.424 "write": true, 00:05:42.424 "write_zeroes": true 00:05:42.424 }, 00:05:42.424 "uuid": "a9548a3b-ce01-4771-b0f3-7a4dc7056a4a", 00:05:42.424 "zoned": false 00:05:42.424 }, 00:05:42.424 { 00:05:42.424 "aliases": [ 00:05:42.424 "ded4a7f0-4bee-56f7-b63a-8eb2f541a202" 00:05:42.424 ], 00:05:42.424 "assigned_rate_limits": { 00:05:42.424 "r_mbytes_per_sec": 0, 00:05:42.424 "rw_ios_per_sec": 0, 00:05:42.424 "rw_mbytes_per_sec": 0, 00:05:42.424 "w_mbytes_per_sec": 0 00:05:42.424 }, 00:05:42.424 "block_size": 512, 00:05:42.424 "claimed": false, 00:05:42.424 "driver_specific": { 00:05:42.424 "passthru": { 00:05:42.424 "base_bdev_name": "Malloc3", 00:05:42.424 "name": "Passthru0" 00:05:42.424 } 00:05:42.424 }, 00:05:42.424 "memory_domains": [ 00:05:42.424 { 00:05:42.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.424 "dma_device_type": 2 00:05:42.424 } 00:05:42.424 ], 00:05:42.424 "name": "Passthru0", 00:05:42.424 "num_blocks": 16384, 00:05:42.424 "product_name": "passthru", 00:05:42.424 "supported_io_types": { 00:05:42.424 "abort": true, 00:05:42.424 "compare": false, 00:05:42.424 "compare_and_write": false, 00:05:42.424 "flush": true, 00:05:42.424 "nvme_admin": false, 00:05:42.424 "nvme_io": false, 00:05:42.424 "read": true, 00:05:42.424 "reset": true, 00:05:42.424 "unmap": true, 00:05:42.424 "write": true, 00:05:42.424 "write_zeroes": true 00:05:42.424 }, 00:05:42.424 "uuid": "ded4a7f0-4bee-56f7-b63a-8eb2f541a202", 00:05:42.424 "zoned": false 00:05:42.424 } 00:05:42.424 ]' 00:05:42.424 14:46:15 -- rpc/rpc.sh@21 -- # jq length 00:05:42.424 14:46:15 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.424 14:46:15 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.424 14:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.424 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.424 14:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.424 14:46:15 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:42.424 14:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.424 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.424 14:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.424 14:46:15 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.424 14:46:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.424 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.424 14:46:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.424 14:46:15 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.424 14:46:15 -- rpc/rpc.sh@26 -- # jq length 00:05:42.424 14:46:15 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.424 00:05:42.424 real 0m0.312s 00:05:42.424 user 0m0.211s 00:05:42.424 sys 0m0.034s 00:05:42.424 14:46:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.424 ************************************ 00:05:42.424 END TEST rpc_daemon_integrity 00:05:42.424 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.424 ************************************ 00:05:42.689 14:46:15 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.689 14:46:15 -- rpc/rpc.sh@84 -- # killprocess 67608 00:05:42.689 14:46:15 -- common/autotest_common.sh@936 -- # '[' -z 67608 ']' 00:05:42.689 14:46:15 -- common/autotest_common.sh@940 -- # kill -0 67608 00:05:42.689 14:46:15 -- common/autotest_common.sh@941 -- # uname 00:05:42.689 14:46:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.689 14:46:15 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67608 00:05:42.689 14:46:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.689 14:46:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.689 14:46:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67608' 00:05:42.689 killing process with pid 67608 00:05:42.689 14:46:15 -- common/autotest_common.sh@955 -- # kill 67608 00:05:42.689 14:46:15 -- common/autotest_common.sh@960 -- # wait 67608 00:05:43.007 00:05:43.007 real 0m3.183s 00:05:43.007 user 0m4.192s 00:05:43.007 sys 0m0.778s 00:05:43.007 14:46:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.007 ************************************ 00:05:43.007 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:43.007 END TEST rpc 00:05:43.007 ************************************ 00:05:43.008 14:46:15 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:43.008 14:46:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.008 14:46:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.008 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:05:43.008 ************************************ 00:05:43.008 START TEST rpc_client 00:05:43.008 ************************************ 00:05:43.008 14:46:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:43.008 * Looking for test storage... 00:05:43.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:43.008 14:46:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:43.008 14:46:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:43.008 14:46:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:43.300 14:46:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:43.300 14:46:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:43.300 14:46:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:43.300 14:46:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:43.300 14:46:16 -- scripts/common.sh@335 -- # IFS=.-: 00:05:43.300 14:46:16 -- scripts/common.sh@335 -- # read -ra ver1 00:05:43.300 14:46:16 -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.300 14:46:16 -- scripts/common.sh@336 -- # read -ra ver2 00:05:43.300 14:46:16 -- scripts/common.sh@337 -- # local 'op=<' 00:05:43.300 14:46:16 -- scripts/common.sh@339 -- # ver1_l=2 00:05:43.300 14:46:16 -- scripts/common.sh@340 -- # ver2_l=1 00:05:43.300 14:46:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:43.300 14:46:16 -- scripts/common.sh@343 -- # case "$op" in 00:05:43.300 14:46:16 -- scripts/common.sh@344 -- # : 1 00:05:43.300 14:46:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:43.300 14:46:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.300 14:46:16 -- scripts/common.sh@364 -- # decimal 1 00:05:43.300 14:46:16 -- scripts/common.sh@352 -- # local d=1 00:05:43.300 14:46:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.300 14:46:16 -- scripts/common.sh@354 -- # echo 1 00:05:43.300 14:46:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:43.300 14:46:16 -- scripts/common.sh@365 -- # decimal 2 00:05:43.300 14:46:16 -- scripts/common.sh@352 -- # local d=2 00:05:43.300 14:46:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.300 14:46:16 -- scripts/common.sh@354 -- # echo 2 00:05:43.300 14:46:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:43.300 14:46:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:43.300 14:46:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:43.300 14:46:16 -- scripts/common.sh@367 -- # return 0 00:05:43.300 14:46:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.300 14:46:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:43.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.300 --rc genhtml_branch_coverage=1 00:05:43.300 --rc genhtml_function_coverage=1 00:05:43.300 --rc genhtml_legend=1 00:05:43.300 --rc geninfo_all_blocks=1 00:05:43.300 --rc geninfo_unexecuted_blocks=1 00:05:43.300 00:05:43.300 ' 00:05:43.300 14:46:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:43.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.300 --rc genhtml_branch_coverage=1 00:05:43.300 --rc genhtml_function_coverage=1 00:05:43.300 --rc genhtml_legend=1 00:05:43.300 --rc geninfo_all_blocks=1 00:05:43.300 --rc geninfo_unexecuted_blocks=1 00:05:43.300 00:05:43.300 ' 00:05:43.300 14:46:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:43.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.300 --rc genhtml_branch_coverage=1 00:05:43.300 --rc genhtml_function_coverage=1 00:05:43.300 --rc genhtml_legend=1 00:05:43.300 --rc geninfo_all_blocks=1 00:05:43.300 --rc geninfo_unexecuted_blocks=1 00:05:43.300 00:05:43.300 ' 00:05:43.300 14:46:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:43.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.300 --rc genhtml_branch_coverage=1 00:05:43.300 --rc genhtml_function_coverage=1 00:05:43.300 --rc genhtml_legend=1 00:05:43.300 --rc geninfo_all_blocks=1 00:05:43.300 --rc geninfo_unexecuted_blocks=1 00:05:43.300 00:05:43.300 ' 00:05:43.300 14:46:16 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:43.300 OK 00:05:43.300 14:46:16 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:43.300 00:05:43.300 real 0m0.211s 00:05:43.300 user 0m0.127s 00:05:43.300 sys 0m0.093s 00:05:43.300 14:46:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.300 14:46:16 -- common/autotest_common.sh@10 -- # set +x 00:05:43.300 ************************************ 00:05:43.300 END TEST rpc_client 00:05:43.300 ************************************ 00:05:43.300 14:46:16 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:43.300 14:46:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.300 14:46:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.300 14:46:16 -- common/autotest_common.sh@10 -- # set +x 00:05:43.300 ************************************ 00:05:43.300 START TEST 
json_config 00:05:43.300 ************************************ 00:05:43.300 14:46:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:43.300 14:46:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:43.300 14:46:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:43.300 14:46:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:43.300 14:46:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:43.300 14:46:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:43.300 14:46:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:43.300 14:46:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:43.300 14:46:16 -- scripts/common.sh@335 -- # IFS=.-: 00:05:43.300 14:46:16 -- scripts/common.sh@335 -- # read -ra ver1 00:05:43.300 14:46:16 -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.300 14:46:16 -- scripts/common.sh@336 -- # read -ra ver2 00:05:43.300 14:46:16 -- scripts/common.sh@337 -- # local 'op=<' 00:05:43.300 14:46:16 -- scripts/common.sh@339 -- # ver1_l=2 00:05:43.300 14:46:16 -- scripts/common.sh@340 -- # ver2_l=1 00:05:43.300 14:46:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:43.300 14:46:16 -- scripts/common.sh@343 -- # case "$op" in 00:05:43.300 14:46:16 -- scripts/common.sh@344 -- # : 1 00:05:43.300 14:46:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:43.300 14:46:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.300 14:46:16 -- scripts/common.sh@364 -- # decimal 1 00:05:43.300 14:46:16 -- scripts/common.sh@352 -- # local d=1 00:05:43.300 14:46:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.300 14:46:16 -- scripts/common.sh@354 -- # echo 1 00:05:43.300 14:46:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:43.300 14:46:16 -- scripts/common.sh@365 -- # decimal 2 00:05:43.300 14:46:16 -- scripts/common.sh@352 -- # local d=2 00:05:43.300 14:46:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.300 14:46:16 -- scripts/common.sh@354 -- # echo 2 00:05:43.300 14:46:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:43.300 14:46:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:43.300 14:46:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:43.300 14:46:16 -- scripts/common.sh@367 -- # return 0 00:05:43.300 14:46:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.300 14:46:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:43.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.301 --rc genhtml_branch_coverage=1 00:05:43.301 --rc genhtml_function_coverage=1 00:05:43.301 --rc genhtml_legend=1 00:05:43.301 --rc geninfo_all_blocks=1 00:05:43.301 --rc geninfo_unexecuted_blocks=1 00:05:43.301 00:05:43.301 ' 00:05:43.301 14:46:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:43.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.301 --rc genhtml_branch_coverage=1 00:05:43.301 --rc genhtml_function_coverage=1 00:05:43.301 --rc genhtml_legend=1 00:05:43.301 --rc geninfo_all_blocks=1 00:05:43.301 --rc geninfo_unexecuted_blocks=1 00:05:43.301 00:05:43.301 ' 00:05:43.301 14:46:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:43.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.301 --rc genhtml_branch_coverage=1 00:05:43.301 --rc genhtml_function_coverage=1 00:05:43.301 --rc genhtml_legend=1 00:05:43.301 --rc 
geninfo_all_blocks=1 00:05:43.301 --rc geninfo_unexecuted_blocks=1 00:05:43.301 00:05:43.301 ' 00:05:43.301 14:46:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:43.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.301 --rc genhtml_branch_coverage=1 00:05:43.301 --rc genhtml_function_coverage=1 00:05:43.301 --rc genhtml_legend=1 00:05:43.301 --rc geninfo_all_blocks=1 00:05:43.301 --rc geninfo_unexecuted_blocks=1 00:05:43.301 00:05:43.301 ' 00:05:43.301 14:46:16 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.301 14:46:16 -- nvmf/common.sh@7 -- # uname -s 00:05:43.301 14:46:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.301 14:46:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.301 14:46:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.301 14:46:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.301 14:46:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.301 14:46:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.301 14:46:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.301 14:46:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.301 14:46:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.301 14:46:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.301 14:46:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:05:43.301 14:46:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:05:43.301 14:46:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.301 14:46:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.301 14:46:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.301 14:46:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.301 14:46:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.301 14:46:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.301 14:46:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.301 14:46:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.301 14:46:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.301 14:46:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.301 
14:46:16 -- paths/export.sh@5 -- # export PATH 00:05:43.301 14:46:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.301 14:46:16 -- nvmf/common.sh@46 -- # : 0 00:05:43.301 14:46:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:43.301 14:46:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:43.301 14:46:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:43.301 14:46:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.301 14:46:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.301 14:46:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:43.301 14:46:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:43.301 14:46:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:43.301 14:46:16 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:43.301 14:46:16 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:43.301 14:46:16 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:43.301 14:46:16 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:43.301 14:46:16 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:43.301 14:46:16 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:43.301 14:46:16 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:43.301 14:46:16 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:43.301 14:46:16 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:43.301 14:46:16 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:43.301 14:46:16 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:43.301 14:46:16 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:43.301 14:46:16 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:43.560 14:46:16 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.560 INFO: JSON configuration test init 00:05:43.560 14:46:16 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:43.560 14:46:16 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:43.560 14:46:16 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:43.560 14:46:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.560 14:46:16 -- common/autotest_common.sh@10 -- # set +x 00:05:43.560 14:46:16 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:43.560 14:46:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.560 14:46:16 -- common/autotest_common.sh@10 -- # set +x 00:05:43.560 14:46:16 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:43.560 14:46:16 -- json_config/json_config.sh@98 -- # local app=target 00:05:43.560 
14:46:16 -- json_config/json_config.sh@99 -- # shift 00:05:43.560 14:46:16 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:43.560 14:46:16 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:43.560 14:46:16 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:43.560 14:46:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.560 14:46:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.560 14:46:16 -- json_config/json_config.sh@111 -- # app_pid[$app]=67929 00:05:43.560 14:46:16 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:43.560 Waiting for target to run... 00:05:43.560 14:46:16 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:43.560 14:46:16 -- json_config/json_config.sh@114 -- # waitforlisten 67929 /var/tmp/spdk_tgt.sock 00:05:43.560 14:46:16 -- common/autotest_common.sh@829 -- # '[' -z 67929 ']' 00:05:43.561 14:46:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.561 14:46:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.561 14:46:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.561 14:46:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.561 14:46:16 -- common/autotest_common.sh@10 -- # set +x 00:05:43.561 [2024-12-01 14:46:16.489526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.561 [2024-12-01 14:46:16.489640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67929 ] 00:05:44.129 [2024-12-01 14:46:17.050379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.129 [2024-12-01 14:46:17.115859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.129 [2024-12-01 14:46:17.115997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.388 14:46:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.388 14:46:17 -- common/autotest_common.sh@862 -- # return 0 00:05:44.388 00:05:44.388 14:46:17 -- json_config/json_config.sh@115 -- # echo '' 00:05:44.388 14:46:17 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:44.388 14:46:17 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:44.388 14:46:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.388 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.388 14:46:17 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:44.388 14:46:17 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:44.388 14:46:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:44.388 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.646 14:46:17 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:44.646 14:46:17 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:44.646 14:46:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:44.905 14:46:17 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:44.905 14:46:17 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:44.905 14:46:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.905 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.905 14:46:17 -- json_config/json_config.sh@48 -- # local ret=0 00:05:44.905 14:46:17 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:44.905 14:46:17 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:44.905 14:46:17 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:44.905 14:46:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:44.905 14:46:17 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:45.164 14:46:18 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:45.164 14:46:18 -- json_config/json_config.sh@51 -- # local get_types 00:05:45.164 14:46:18 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:45.164 14:46:18 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:45.164 14:46:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.164 14:46:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.164 14:46:18 -- json_config/json_config.sh@58 -- # return 0 00:05:45.164 14:46:18 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:45.164 14:46:18 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:45.164 14:46:18 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:45.164 14:46:18 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:45.164 14:46:18 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:45.164 14:46:18 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:45.164 14:46:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.164 14:46:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.164 14:46:18 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:45.164 14:46:18 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:45.164 14:46:18 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:45.164 14:46:18 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.164 14:46:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.423 MallocForNvmf0 00:05:45.423 14:46:18 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.423 14:46:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.682 MallocForNvmf1 00:05:45.682 14:46:18 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:45.682 14:46:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:45.941 [2024-12-01 14:46:19.044273] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.200 14:46:19 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.200 14:46:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.459 14:46:19 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.459 14:46:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.459 14:46:19 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:46.459 14:46:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:46.717 14:46:19 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:46.717 14:46:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:46.976 [2024-12-01 14:46:19.924641] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:46.976 14:46:19 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:46.976 14:46:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.976 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:05:46.976 14:46:19 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:46.976 14:46:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.976 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:05:46.976 14:46:20 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:46.976 14:46:20 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:46.976 14:46:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.236 MallocBdevForConfigChangeCheck 00:05:47.236 14:46:20 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:47.236 14:46:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.236 14:46:20 -- common/autotest_common.sh@10 -- # set +x 00:05:47.236 14:46:20 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:47.236 14:46:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.800 INFO: shutting down applications... 00:05:47.800 14:46:20 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
00:05:47.800 14:46:20 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:47.800 14:46:20 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:47.800 14:46:20 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:47.800 14:46:20 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:48.056 Calling clear_iscsi_subsystem 00:05:48.056 Calling clear_nvmf_subsystem 00:05:48.056 Calling clear_nbd_subsystem 00:05:48.056 Calling clear_ublk_subsystem 00:05:48.056 Calling clear_vhost_blk_subsystem 00:05:48.056 Calling clear_vhost_scsi_subsystem 00:05:48.056 Calling clear_scheduler_subsystem 00:05:48.056 Calling clear_bdev_subsystem 00:05:48.056 Calling clear_accel_subsystem 00:05:48.056 Calling clear_vmd_subsystem 00:05:48.056 Calling clear_sock_subsystem 00:05:48.056 Calling clear_iobuf_subsystem 00:05:48.056 14:46:20 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:48.056 14:46:20 -- json_config/json_config.sh@396 -- # count=100 00:05:48.056 14:46:20 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:48.056 14:46:20 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.056 14:46:20 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:48.056 14:46:20 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:48.314 14:46:21 -- json_config/json_config.sh@398 -- # break 00:05:48.314 14:46:21 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:48.314 14:46:21 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:48.314 14:46:21 -- json_config/json_config.sh@120 -- # local app=target 00:05:48.314 14:46:21 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:48.314 14:46:21 -- json_config/json_config.sh@124 -- # [[ -n 67929 ]] 00:05:48.314 14:46:21 -- json_config/json_config.sh@127 -- # kill -SIGINT 67929 00:05:48.314 14:46:21 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:48.314 14:46:21 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:48.314 14:46:21 -- json_config/json_config.sh@130 -- # kill -0 67929 00:05:48.314 14:46:21 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:48.883 14:46:21 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:48.883 14:46:21 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:48.883 14:46:21 -- json_config/json_config.sh@130 -- # kill -0 67929 00:05:48.883 14:46:21 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:48.883 14:46:21 -- json_config/json_config.sh@132 -- # break 00:05:48.883 14:46:21 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:48.883 SPDK target shutdown done 00:05:48.883 14:46:21 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:48.883 INFO: relaunching applications... 00:05:48.883 14:46:21 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
00:05:48.883 14:46:21 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.883 14:46:21 -- json_config/json_config.sh@98 -- # local app=target 00:05:48.883 14:46:21 -- json_config/json_config.sh@99 -- # shift 00:05:48.883 14:46:21 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:48.883 14:46:21 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:48.883 14:46:21 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:48.883 14:46:21 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:48.883 14:46:21 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:48.883 14:46:21 -- json_config/json_config.sh@111 -- # app_pid[$app]=68204 00:05:48.883 14:46:21 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:48.883 Waiting for target to run... 00:05:48.883 14:46:21 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.883 14:46:21 -- json_config/json_config.sh@114 -- # waitforlisten 68204 /var/tmp/spdk_tgt.sock 00:05:48.883 14:46:21 -- common/autotest_common.sh@829 -- # '[' -z 68204 ']' 00:05:48.883 14:46:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.883 14:46:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.883 14:46:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.883 14:46:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.883 14:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:48.883 [2024-12-01 14:46:21.908299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.883 [2024-12-01 14:46:21.908416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68204 ] 00:05:49.451 [2024-12-01 14:46:22.426323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.451 [2024-12-01 14:46:22.496480] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.451 [2024-12-01 14:46:22.496625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.710 [2024-12-01 14:46:22.792252] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.969 [2024-12-01 14:46:22.824377] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:50.536 14:46:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.537 14:46:23 -- common/autotest_common.sh@862 -- # return 0 00:05:50.537 14:46:23 -- json_config/json_config.sh@115 -- # echo '' 00:05:50.537 00:05:50.537 14:46:23 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:50.537 INFO: Checking if target configuration is the same... 00:05:50.537 14:46:23 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:50.537 14:46:23 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:50.537 14:46:23 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:50.537 14:46:23 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.537 + '[' 2 -ne 2 ']' 00:05:50.537 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:50.537 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:50.537 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:50.537 +++ basename /dev/fd/62 00:05:50.537 ++ mktemp /tmp/62.XXX 00:05:50.537 + tmp_file_1=/tmp/62.0eL 00:05:50.537 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:50.537 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.537 + tmp_file_2=/tmp/spdk_tgt_config.json.AdU 00:05:50.537 + ret=0 00:05:50.537 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:51.105 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:51.105 + diff -u /tmp/62.0eL /tmp/spdk_tgt_config.json.AdU 00:05:51.105 INFO: JSON config files are the same 00:05:51.105 + echo 'INFO: JSON config files are the same' 00:05:51.105 + rm /tmp/62.0eL /tmp/spdk_tgt_config.json.AdU 00:05:51.105 + exit 0 00:05:51.105 14:46:24 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:51.105 INFO: changing configuration and checking if this can be detected... 00:05:51.105 14:46:24 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:51.105 14:46:24 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:51.105 14:46:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:51.364 14:46:24 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.364 14:46:24 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:51.364 14:46:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.364 + '[' 2 -ne 2 ']' 00:05:51.364 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:51.364 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:51.364 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:51.364 +++ basename /dev/fd/62 00:05:51.364 ++ mktemp /tmp/62.XXX 00:05:51.364 + tmp_file_1=/tmp/62.HVd 00:05:51.364 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.364 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:51.364 + tmp_file_2=/tmp/spdk_tgt_config.json.nka 00:05:51.364 + ret=0 00:05:51.364 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:51.623 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:51.882 + diff -u /tmp/62.HVd /tmp/spdk_tgt_config.json.nka 00:05:51.882 + ret=1 00:05:51.882 + echo '=== Start of file: /tmp/62.HVd ===' 00:05:51.882 + cat /tmp/62.HVd 00:05:51.882 + echo '=== End of file: /tmp/62.HVd ===' 00:05:51.882 + echo '' 00:05:51.882 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nka ===' 00:05:51.882 + cat /tmp/spdk_tgt_config.json.nka 00:05:51.882 + echo '=== End of file: /tmp/spdk_tgt_config.json.nka ===' 00:05:51.882 + echo '' 00:05:51.882 + rm /tmp/62.HVd /tmp/spdk_tgt_config.json.nka 00:05:51.882 + exit 1 00:05:51.882 INFO: configuration change detected. 00:05:51.882 14:46:24 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:51.882 14:46:24 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:51.882 14:46:24 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:51.882 14:46:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.882 14:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:51.882 14:46:24 -- json_config/json_config.sh@360 -- # local ret=0 00:05:51.882 14:46:24 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:51.882 14:46:24 -- json_config/json_config.sh@370 -- # [[ -n 68204 ]] 00:05:51.882 14:46:24 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:51.882 14:46:24 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:51.882 14:46:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.882 14:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:51.882 14:46:24 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:51.882 14:46:24 -- json_config/json_config.sh@246 -- # uname -s 00:05:51.883 14:46:24 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:51.883 14:46:24 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:51.883 14:46:24 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:51.883 14:46:24 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:51.883 14:46:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.883 14:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:51.883 14:46:24 -- json_config/json_config.sh@376 -- # killprocess 68204 00:05:51.883 14:46:24 -- common/autotest_common.sh@936 -- # '[' -z 68204 ']' 00:05:51.883 14:46:24 -- common/autotest_common.sh@940 -- # kill -0 68204 00:05:51.883 14:46:24 -- common/autotest_common.sh@941 -- # uname 00:05:51.883 14:46:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.883 14:46:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68204 00:05:51.883 14:46:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.883 14:46:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.883 14:46:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68204' 00:05:51.883 killing process with pid 68204 00:05:51.883 
14:46:24 -- common/autotest_common.sh@955 -- # kill 68204 00:05:51.883 14:46:24 -- common/autotest_common.sh@960 -- # wait 68204 00:05:52.148 14:46:25 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.148 14:46:25 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:52.148 14:46:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.148 14:46:25 -- common/autotest_common.sh@10 -- # set +x 00:05:52.148 14:46:25 -- json_config/json_config.sh@381 -- # return 0 00:05:52.148 INFO: Success 00:05:52.148 14:46:25 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:52.148 00:05:52.148 real 0m8.848s 00:05:52.148 user 0m12.125s 00:05:52.148 sys 0m2.121s 00:05:52.148 14:46:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.148 14:46:25 -- common/autotest_common.sh@10 -- # set +x 00:05:52.148 ************************************ 00:05:52.148 END TEST json_config 00:05:52.148 ************************************ 00:05:52.148 14:46:25 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:52.148 14:46:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.148 14:46:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.148 14:46:25 -- common/autotest_common.sh@10 -- # set +x 00:05:52.148 ************************************ 00:05:52.148 START TEST json_config_extra_key 00:05:52.148 ************************************ 00:05:52.148 14:46:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:52.148 14:46:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:52.149 14:46:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:52.149 14:46:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:52.411 14:46:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:52.411 14:46:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:52.411 14:46:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:52.411 14:46:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:52.411 14:46:25 -- scripts/common.sh@335 -- # IFS=.-: 00:05:52.411 14:46:25 -- scripts/common.sh@335 -- # read -ra ver1 00:05:52.411 14:46:25 -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.411 14:46:25 -- scripts/common.sh@336 -- # read -ra ver2 00:05:52.411 14:46:25 -- scripts/common.sh@337 -- # local 'op=<' 00:05:52.411 14:46:25 -- scripts/common.sh@339 -- # ver1_l=2 00:05:52.411 14:46:25 -- scripts/common.sh@340 -- # ver2_l=1 00:05:52.411 14:46:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:52.411 14:46:25 -- scripts/common.sh@343 -- # case "$op" in 00:05:52.411 14:46:25 -- scripts/common.sh@344 -- # : 1 00:05:52.411 14:46:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:52.411 14:46:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.411 14:46:25 -- scripts/common.sh@364 -- # decimal 1 00:05:52.411 14:46:25 -- scripts/common.sh@352 -- # local d=1 00:05:52.411 14:46:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.411 14:46:25 -- scripts/common.sh@354 -- # echo 1 00:05:52.411 14:46:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:52.411 14:46:25 -- scripts/common.sh@365 -- # decimal 2 00:05:52.411 14:46:25 -- scripts/common.sh@352 -- # local d=2 00:05:52.411 14:46:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.411 14:46:25 -- scripts/common.sh@354 -- # echo 2 00:05:52.411 14:46:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:52.411 14:46:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:52.411 14:46:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:52.411 14:46:25 -- scripts/common.sh@367 -- # return 0 00:05:52.411 14:46:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.411 14:46:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:52.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.411 --rc genhtml_branch_coverage=1 00:05:52.411 --rc genhtml_function_coverage=1 00:05:52.411 --rc genhtml_legend=1 00:05:52.411 --rc geninfo_all_blocks=1 00:05:52.411 --rc geninfo_unexecuted_blocks=1 00:05:52.411 00:05:52.411 ' 00:05:52.411 14:46:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:52.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.411 --rc genhtml_branch_coverage=1 00:05:52.411 --rc genhtml_function_coverage=1 00:05:52.411 --rc genhtml_legend=1 00:05:52.411 --rc geninfo_all_blocks=1 00:05:52.411 --rc geninfo_unexecuted_blocks=1 00:05:52.411 00:05:52.411 ' 00:05:52.411 14:46:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:52.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.411 --rc genhtml_branch_coverage=1 00:05:52.411 --rc genhtml_function_coverage=1 00:05:52.411 --rc genhtml_legend=1 00:05:52.411 --rc geninfo_all_blocks=1 00:05:52.411 --rc geninfo_unexecuted_blocks=1 00:05:52.411 00:05:52.411 ' 00:05:52.411 14:46:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:52.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.411 --rc genhtml_branch_coverage=1 00:05:52.411 --rc genhtml_function_coverage=1 00:05:52.411 --rc genhtml_legend=1 00:05:52.411 --rc geninfo_all_blocks=1 00:05:52.411 --rc geninfo_unexecuted_blocks=1 00:05:52.411 00:05:52.411 ' 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:52.411 14:46:25 -- nvmf/common.sh@7 -- # uname -s 00:05:52.411 14:46:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.411 14:46:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.411 14:46:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.411 14:46:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.411 14:46:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.411 14:46:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.411 14:46:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.411 14:46:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.411 14:46:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.411 14:46:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.411 14:46:25 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:05:52.411 14:46:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:05:52.411 14:46:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.411 14:46:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.411 14:46:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:52.411 14:46:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:52.411 14:46:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.411 14:46:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.411 14:46:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.411 14:46:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.411 14:46:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.411 14:46:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.411 14:46:25 -- paths/export.sh@5 -- # export PATH 00:05:52.411 14:46:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.411 14:46:25 -- nvmf/common.sh@46 -- # : 0 00:05:52.411 14:46:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:52.411 14:46:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:52.411 14:46:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:52.411 14:46:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.411 14:46:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.411 14:46:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:52.411 14:46:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:52.411 14:46:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:52.411 INFO: launching applications... 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:52.411 14:46:25 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:52.412 14:46:25 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:52.412 14:46:25 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68395 00:05:52.412 14:46:25 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:52.412 14:46:25 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:52.412 Waiting for target to run... 00:05:52.412 14:46:25 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68395 /var/tmp/spdk_tgt.sock 00:05:52.412 14:46:25 -- common/autotest_common.sh@829 -- # '[' -z 68395 ']' 00:05:52.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:52.412 14:46:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:52.412 14:46:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.412 14:46:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:52.412 14:46:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.412 14:46:25 -- common/autotest_common.sh@10 -- # set +x 00:05:52.412 [2024-12-01 14:46:25.396834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.412 [2024-12-01 14:46:25.397543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68395 ] 00:05:52.978 [2024-12-01 14:46:25.950519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.978 [2024-12-01 14:46:26.015369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.978 [2024-12-01 14:46:26.015496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.546 00:05:53.546 INFO: shutting down applications... 
00:05:53.546 14:46:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.546 14:46:26 -- common/autotest_common.sh@862 -- # return 0 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68395 ]] 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68395 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68395 00:05:53.546 14:46:26 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:53.805 14:46:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:54.064 14:46:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:54.064 14:46:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68395 00:05:54.064 14:46:26 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:54.064 14:46:26 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:54.064 14:46:26 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:54.064 14:46:26 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:54.064 SPDK target shutdown done 00:05:54.064 14:46:26 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:54.064 Success 00:05:54.064 00:05:54.064 real 0m1.792s 00:05:54.064 user 0m1.493s 00:05:54.064 sys 0m0.609s 00:05:54.064 ************************************ 00:05:54.064 END TEST json_config_extra_key 00:05:54.064 ************************************ 00:05:54.064 14:46:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.064 14:46:26 -- common/autotest_common.sh@10 -- # set +x 00:05:54.064 14:46:26 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.064 14:46:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.064 14:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.064 14:46:26 -- common/autotest_common.sh@10 -- # set +x 00:05:54.064 ************************************ 00:05:54.064 START TEST alias_rpc 00:05:54.064 ************************************ 00:05:54.064 14:46:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.064 * Looking for test storage... 
00:05:54.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:54.064 14:46:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.064 14:46:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.064 14:46:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.064 14:46:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.064 14:46:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.064 14:46:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:54.064 14:46:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:54.064 14:46:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:54.064 14:46:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:54.064 14:46:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.064 14:46:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:54.064 14:46:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:54.064 14:46:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:54.064 14:46:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:54.064 14:46:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:54.064 14:46:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:54.064 14:46:27 -- scripts/common.sh@344 -- # : 1 00:05:54.064 14:46:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:54.064 14:46:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.064 14:46:27 -- scripts/common.sh@364 -- # decimal 1 00:05:54.064 14:46:27 -- scripts/common.sh@352 -- # local d=1 00:05:54.064 14:46:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.064 14:46:27 -- scripts/common.sh@354 -- # echo 1 00:05:54.064 14:46:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:54.064 14:46:27 -- scripts/common.sh@365 -- # decimal 2 00:05:54.064 14:46:27 -- scripts/common.sh@352 -- # local d=2 00:05:54.064 14:46:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.064 14:46:27 -- scripts/common.sh@354 -- # echo 2 00:05:54.064 14:46:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:54.064 14:46:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:54.064 14:46:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:54.064 14:46:27 -- scripts/common.sh@367 -- # return 0 00:05:54.064 14:46:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.064 14:46:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:54.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.064 --rc genhtml_branch_coverage=1 00:05:54.064 --rc genhtml_function_coverage=1 00:05:54.064 --rc genhtml_legend=1 00:05:54.064 --rc geninfo_all_blocks=1 00:05:54.064 --rc geninfo_unexecuted_blocks=1 00:05:54.064 00:05:54.064 ' 00:05:54.064 14:46:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:54.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.064 --rc genhtml_branch_coverage=1 00:05:54.064 --rc genhtml_function_coverage=1 00:05:54.064 --rc genhtml_legend=1 00:05:54.064 --rc geninfo_all_blocks=1 00:05:54.064 --rc geninfo_unexecuted_blocks=1 00:05:54.064 00:05:54.064 ' 00:05:54.064 14:46:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:54.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.064 --rc genhtml_branch_coverage=1 00:05:54.064 --rc genhtml_function_coverage=1 00:05:54.064 --rc genhtml_legend=1 00:05:54.064 --rc geninfo_all_blocks=1 00:05:54.064 --rc geninfo_unexecuted_blocks=1 00:05:54.064 00:05:54.064 ' 
00:05:54.064 14:46:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:54.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.064 --rc genhtml_branch_coverage=1 00:05:54.064 --rc genhtml_function_coverage=1 00:05:54.064 --rc genhtml_legend=1 00:05:54.064 --rc geninfo_all_blocks=1 00:05:54.064 --rc geninfo_unexecuted_blocks=1 00:05:54.064 00:05:54.064 ' 00:05:54.064 14:46:27 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:54.064 14:46:27 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68484 00:05:54.065 14:46:27 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.065 14:46:27 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68484 00:05:54.065 14:46:27 -- common/autotest_common.sh@829 -- # '[' -z 68484 ']' 00:05:54.065 14:46:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.324 14:46:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.324 14:46:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.324 14:46:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.324 14:46:27 -- common/autotest_common.sh@10 -- # set +x 00:05:54.324 [2024-12-01 14:46:27.225447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.324 [2024-12-01 14:46:27.225667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68484 ] 00:05:54.324 [2024-12-01 14:46:27.358065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.324 [2024-12-01 14:46:27.412496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.324 [2024-12-01 14:46:27.412838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.260 14:46:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.260 14:46:28 -- common/autotest_common.sh@862 -- # return 0 00:05:55.260 14:46:28 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:55.519 14:46:28 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68484 00:05:55.519 14:46:28 -- common/autotest_common.sh@936 -- # '[' -z 68484 ']' 00:05:55.519 14:46:28 -- common/autotest_common.sh@940 -- # kill -0 68484 00:05:55.519 14:46:28 -- common/autotest_common.sh@941 -- # uname 00:05:55.519 14:46:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:55.519 14:46:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68484 00:05:55.519 14:46:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:55.520 killing process with pid 68484 00:05:55.520 14:46:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:55.520 14:46:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68484' 00:05:55.520 14:46:28 -- common/autotest_common.sh@955 -- # kill 68484 00:05:55.520 14:46:28 -- common/autotest_common.sh@960 -- # wait 68484 00:05:55.778 ************************************ 00:05:55.778 END TEST alias_rpc 00:05:55.778 ************************************ 00:05:55.779 00:05:55.779 real 0m1.862s 00:05:55.779 user 0m2.111s 00:05:55.779 sys 0m0.451s 
00:05:55.779 14:46:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.779 14:46:28 -- common/autotest_common.sh@10 -- # set +x 00:05:55.779 14:46:28 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:55.779 14:46:28 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.779 14:46:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.779 14:46:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.779 14:46:28 -- common/autotest_common.sh@10 -- # set +x 00:05:56.037 ************************************ 00:05:56.037 START TEST dpdk_mem_utility 00:05:56.037 ************************************ 00:05:56.037 14:46:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.037 * Looking for test storage... 00:05:56.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:56.037 14:46:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:56.037 14:46:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:56.037 14:46:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:56.037 14:46:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:56.037 14:46:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:56.037 14:46:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:56.037 14:46:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:56.038 14:46:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:56.038 14:46:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:56.038 14:46:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.038 14:46:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:56.038 14:46:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:56.038 14:46:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:56.038 14:46:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:56.038 14:46:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:56.038 14:46:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:56.038 14:46:29 -- scripts/common.sh@344 -- # : 1 00:05:56.038 14:46:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:56.038 14:46:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.038 14:46:29 -- scripts/common.sh@364 -- # decimal 1 00:05:56.038 14:46:29 -- scripts/common.sh@352 -- # local d=1 00:05:56.038 14:46:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.038 14:46:29 -- scripts/common.sh@354 -- # echo 1 00:05:56.038 14:46:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:56.038 14:46:29 -- scripts/common.sh@365 -- # decimal 2 00:05:56.038 14:46:29 -- scripts/common.sh@352 -- # local d=2 00:05:56.038 14:46:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.038 14:46:29 -- scripts/common.sh@354 -- # echo 2 00:05:56.038 14:46:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:56.038 14:46:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:56.038 14:46:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:56.038 14:46:29 -- scripts/common.sh@367 -- # return 0 00:05:56.038 14:46:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.038 14:46:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:56.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.038 --rc genhtml_branch_coverage=1 00:05:56.038 --rc genhtml_function_coverage=1 00:05:56.038 --rc genhtml_legend=1 00:05:56.038 --rc geninfo_all_blocks=1 00:05:56.038 --rc geninfo_unexecuted_blocks=1 00:05:56.038 00:05:56.038 ' 00:05:56.038 14:46:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:56.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.038 --rc genhtml_branch_coverage=1 00:05:56.038 --rc genhtml_function_coverage=1 00:05:56.038 --rc genhtml_legend=1 00:05:56.038 --rc geninfo_all_blocks=1 00:05:56.038 --rc geninfo_unexecuted_blocks=1 00:05:56.038 00:05:56.038 ' 00:05:56.038 14:46:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:56.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.038 --rc genhtml_branch_coverage=1 00:05:56.038 --rc genhtml_function_coverage=1 00:05:56.038 --rc genhtml_legend=1 00:05:56.038 --rc geninfo_all_blocks=1 00:05:56.038 --rc geninfo_unexecuted_blocks=1 00:05:56.038 00:05:56.038 ' 00:05:56.038 14:46:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:56.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.038 --rc genhtml_branch_coverage=1 00:05:56.038 --rc genhtml_function_coverage=1 00:05:56.038 --rc genhtml_legend=1 00:05:56.038 --rc geninfo_all_blocks=1 00:05:56.038 --rc geninfo_unexecuted_blocks=1 00:05:56.038 00:05:56.038 ' 00:05:56.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.038 14:46:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:56.038 14:46:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68583 00:05:56.038 14:46:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68583 00:05:56.038 14:46:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.038 14:46:29 -- common/autotest_common.sh@829 -- # '[' -z 68583 ']' 00:05:56.038 14:46:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.038 14:46:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.038 14:46:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:56.038 14:46:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.038 14:46:29 -- common/autotest_common.sh@10 -- # set +x 00:05:56.038 [2024-12-01 14:46:29.145423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.038 [2024-12-01 14:46:29.145728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68583 ] 00:05:56.297 [2024-12-01 14:46:29.283143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.297 [2024-12-01 14:46:29.337337] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.297 [2024-12-01 14:46:29.337478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.235 14:46:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.235 14:46:30 -- common/autotest_common.sh@862 -- # return 0 00:05:57.235 14:46:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:57.235 14:46:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:57.235 14:46:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.235 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:05:57.235 { 00:05:57.235 "filename": "/tmp/spdk_mem_dump.txt" 00:05:57.235 } 00:05:57.235 14:46:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.235 14:46:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:57.235 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:57.235 1 heaps totaling size 814.000000 MiB 00:05:57.235 size: 814.000000 MiB heap id: 0 00:05:57.235 end heaps---------- 00:05:57.235 8 mempools totaling size 598.116089 MiB 00:05:57.235 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:57.235 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:57.235 size: 84.521057 MiB name: bdev_io_68583 00:05:57.235 size: 51.011292 MiB name: evtpool_68583 00:05:57.235 size: 50.003479 MiB name: msgpool_68583 00:05:57.236 size: 21.763794 MiB name: PDU_Pool 00:05:57.236 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:57.236 size: 0.026123 MiB name: Session_Pool 00:05:57.236 end mempools------- 00:05:57.236 6 memzones totaling size 4.142822 MiB 00:05:57.236 size: 1.000366 MiB name: RG_ring_0_68583 00:05:57.236 size: 1.000366 MiB name: RG_ring_1_68583 00:05:57.236 size: 1.000366 MiB name: RG_ring_4_68583 00:05:57.236 size: 1.000366 MiB name: RG_ring_5_68583 00:05:57.236 size: 0.125366 MiB name: RG_ring_2_68583 00:05:57.236 size: 0.015991 MiB name: RG_ring_3_68583 00:05:57.236 end memzones------- 00:05:57.236 14:46:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:57.236 heap id: 0 total size: 814.000000 MiB number of busy elements: 214 number of free elements: 15 00:05:57.236 list of free elements. 
size: 12.487671 MiB 00:05:57.236 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:57.236 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:57.236 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:57.236 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:57.236 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:57.236 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:57.236 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:57.236 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:57.236 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:57.236 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:05:57.236 element at address: 0x20000b200000 with size: 0.489990 MiB 00:05:57.236 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:57.236 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:57.236 element at address: 0x200027e00000 with size: 0.398315 MiB 00:05:57.236 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:57.236 list of standard malloc elements. size: 199.249756 MiB 00:05:57.236 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:57.236 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:57.236 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:57.236 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:57.236 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:57.236 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:57.236 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:57.236 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:57.236 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:57.236 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:05:57.236 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:57.236 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:57.236 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:57.237 element at 
address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:57.237 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e66040 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e640 with size: 0.000183 MiB 
00:05:57.237 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:57.237 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:57.238 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:57.238 list of memzone associated elements. 
size: 602.262573 MiB 00:05:57.238 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:57.238 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:57.238 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:57.238 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:57.238 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:57.238 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68583_0 00:05:57.238 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:57.238 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68583_0 00:05:57.238 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:57.238 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68583_0 00:05:57.238 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:57.238 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:57.238 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:57.238 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:57.238 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:57.238 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68583 00:05:57.238 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:57.238 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68583 00:05:57.238 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:57.238 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68583 00:05:57.238 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:57.238 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:57.238 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:57.238 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:57.238 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:57.238 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:57.238 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:57.238 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:57.238 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:57.238 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68583 00:05:57.238 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:57.238 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68583 00:05:57.238 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:57.238 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68583 00:05:57.238 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:57.238 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68583 00:05:57.238 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:57.238 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68583 00:05:57.238 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:57.238 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:57.238 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:57.238 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:57.238 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:57.238 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:57.238 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:57.238 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68583 00:05:57.238 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:57.238 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:57.238 element at address: 0x200027e66100 with size: 0.023743 MiB 00:05:57.238 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:57.238 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:57.238 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68583 00:05:57.238 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:05:57.238 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:57.238 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:57.238 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68583 00:05:57.238 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:57.238 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68583 00:05:57.238 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:05:57.238 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:57.238 14:46:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:57.238 14:46:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68583 00:05:57.238 14:46:30 -- common/autotest_common.sh@936 -- # '[' -z 68583 ']' 00:05:57.238 14:46:30 -- common/autotest_common.sh@940 -- # kill -0 68583 00:05:57.238 14:46:30 -- common/autotest_common.sh@941 -- # uname 00:05:57.238 14:46:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.238 14:46:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68583 00:05:57.238 killing process with pid 68583 00:05:57.238 14:46:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.238 14:46:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.238 14:46:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68583' 00:05:57.238 14:46:30 -- common/autotest_common.sh@955 -- # kill 68583 00:05:57.238 14:46:30 -- common/autotest_common.sh@960 -- # wait 68583 00:05:57.807 00:05:57.807 real 0m1.770s 00:05:57.807 user 0m1.933s 00:05:57.807 sys 0m0.454s 00:05:57.807 14:46:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.807 ************************************ 00:05:57.807 END TEST dpdk_mem_utility 00:05:57.807 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:05:57.807 ************************************ 00:05:57.807 14:46:30 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:57.807 14:46:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.807 14:46:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.807 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:05:57.807 ************************************ 00:05:57.807 START TEST event 00:05:57.807 ************************************ 00:05:57.807 14:46:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:57.807 * Looking for test storage... 
00:05:57.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:57.807 14:46:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:57.807 14:46:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:57.807 14:46:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:57.807 14:46:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:57.807 14:46:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:57.807 14:46:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:57.807 14:46:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:57.807 14:46:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:57.807 14:46:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:57.807 14:46:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.807 14:46:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:57.807 14:46:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:57.807 14:46:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:57.807 14:46:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:57.807 14:46:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:57.807 14:46:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:57.807 14:46:30 -- scripts/common.sh@344 -- # : 1 00:05:57.807 14:46:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:57.807 14:46:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.807 14:46:30 -- scripts/common.sh@364 -- # decimal 1 00:05:57.807 14:46:30 -- scripts/common.sh@352 -- # local d=1 00:05:57.807 14:46:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.807 14:46:30 -- scripts/common.sh@354 -- # echo 1 00:05:57.807 14:46:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:57.807 14:46:30 -- scripts/common.sh@365 -- # decimal 2 00:05:57.807 14:46:30 -- scripts/common.sh@352 -- # local d=2 00:05:57.807 14:46:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.807 14:46:30 -- scripts/common.sh@354 -- # echo 2 00:05:57.807 14:46:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:57.807 14:46:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:57.807 14:46:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:57.807 14:46:30 -- scripts/common.sh@367 -- # return 0 00:05:57.807 14:46:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.807 14:46:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:57.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.807 --rc genhtml_branch_coverage=1 00:05:57.807 --rc genhtml_function_coverage=1 00:05:57.807 --rc genhtml_legend=1 00:05:57.807 --rc geninfo_all_blocks=1 00:05:57.808 --rc geninfo_unexecuted_blocks=1 00:05:57.808 00:05:57.808 ' 00:05:57.808 14:46:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:57.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.808 --rc genhtml_branch_coverage=1 00:05:57.808 --rc genhtml_function_coverage=1 00:05:57.808 --rc genhtml_legend=1 00:05:57.808 --rc geninfo_all_blocks=1 00:05:57.808 --rc geninfo_unexecuted_blocks=1 00:05:57.808 00:05:57.808 ' 00:05:57.808 14:46:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:57.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.808 --rc genhtml_branch_coverage=1 00:05:57.808 --rc genhtml_function_coverage=1 00:05:57.808 --rc genhtml_legend=1 00:05:57.808 --rc geninfo_all_blocks=1 00:05:57.808 --rc geninfo_unexecuted_blocks=1 00:05:57.808 00:05:57.808 ' 00:05:57.808 14:46:30 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:57.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.808 --rc genhtml_branch_coverage=1 00:05:57.808 --rc genhtml_function_coverage=1 00:05:57.808 --rc genhtml_legend=1 00:05:57.808 --rc geninfo_all_blocks=1 00:05:57.808 --rc geninfo_unexecuted_blocks=1 00:05:57.808 00:05:57.808 ' 00:05:57.808 14:46:30 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:57.808 14:46:30 -- bdev/nbd_common.sh@6 -- # set -e 00:05:57.808 14:46:30 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.808 14:46:30 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:57.808 14:46:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.808 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:05:57.808 ************************************ 00:05:57.808 START TEST event_perf 00:05:57.808 ************************************ 00:05:57.808 14:46:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.067 Running I/O for 1 seconds...[2024-12-01 14:46:30.922746] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.067 [2024-12-01 14:46:30.923464] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68685 ] 00:05:58.067 [2024-12-01 14:46:31.059506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.067 [2024-12-01 14:46:31.111779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.067 [2024-12-01 14:46:31.111906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.067 Running I/O for 1 seconds...[2024-12-01 14:46:31.112953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.067 [2024-12-01 14:46:31.112966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.443 00:05:59.443 lcore 0: 128313 00:05:59.443 lcore 1: 128312 00:05:59.443 lcore 2: 128313 00:05:59.443 lcore 3: 128315 00:05:59.443 done. 00:05:59.443 00:05:59.443 real 0m1.257s 00:05:59.443 user 0m4.085s 00:05:59.443 sys 0m0.051s 00:05:59.443 14:46:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.443 14:46:32 -- common/autotest_common.sh@10 -- # set +x 00:05:59.443 ************************************ 00:05:59.443 END TEST event_perf 00:05:59.443 ************************************ 00:05:59.443 14:46:32 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.443 14:46:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:59.443 14:46:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.443 14:46:32 -- common/autotest_common.sh@10 -- # set +x 00:05:59.443 ************************************ 00:05:59.443 START TEST event_reactor 00:05:59.443 ************************************ 00:05:59.443 14:46:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.443 [2024-12-01 14:46:32.233355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
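(Editor's note) The cmp_versions trace above ("lt 1.15 2") checks whether the installed lcov is older than 2.x by splitting both version strings on '.', '-' and ':' and comparing the fields numerically, left to right. A condensed stand-alone sketch of the same idea — numeric fields only, not the scripts/common.sh code verbatim:

    version_lt() {
      local IFS='.-:'
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      local v x y
      for ((v = 0; v < n; v++)); do
        x=${a[v]:-0}; y=${b[v]:-0}
        (( x > y )) && return 1   # first version is newer, so not less-than
        (( x < y )) && return 0   # first version is older
      done
      return 1                    # equal -> not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"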
00:05:59.443 [2024-12-01 14:46:32.233470] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68718 ] 00:05:59.444 [2024-12-01 14:46:32.369920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.444 [2024-12-01 14:46:32.420267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.380 test_start 00:06:00.380 oneshot 00:06:00.380 tick 100 00:06:00.380 tick 100 00:06:00.380 tick 250 00:06:00.380 tick 100 00:06:00.380 tick 100 00:06:00.380 tick 100 00:06:00.380 tick 250 00:06:00.380 tick 500 00:06:00.380 tick 100 00:06:00.380 tick 100 00:06:00.380 tick 250 00:06:00.380 tick 100 00:06:00.380 tick 100 00:06:00.380 test_end 00:06:00.380 00:06:00.380 real 0m1.248s 00:06:00.380 user 0m1.096s 00:06:00.380 sys 0m0.048s 00:06:00.380 14:46:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.380 ************************************ 00:06:00.380 END TEST event_reactor 00:06:00.380 ************************************ 00:06:00.380 14:46:33 -- common/autotest_common.sh@10 -- # set +x 00:06:00.639 14:46:33 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.639 14:46:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:00.639 14:46:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.639 14:46:33 -- common/autotest_common.sh@10 -- # set +x 00:06:00.639 ************************************ 00:06:00.639 START TEST event_reactor_perf 00:06:00.639 ************************************ 00:06:00.639 14:46:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.639 [2024-12-01 14:46:33.534687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
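(Editor's note) event_perf above runs with -m 0xF and reports one event counter per lcore, while reactor and reactor_perf run with -c 0x1 and a single reactor on core 0: the -m/-c argument is simply a bitmap of CPU cores. A small illustrative snippet (not part of the tests) for expanding such a mask:

    mask=0xF               # the -m/-c value from the command line
    for core in {0..31}; do
      if (( (mask >> core) & 1 )); then
        echo "lcore $core enabled"
      fi
    done
    # 0xF -> lcores 0-3 (the four reactors above); 0x1 -> lcore 0 only.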
00:06:00.639 [2024-12-01 14:46:33.534802] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68748 ] 00:06:00.639 [2024-12-01 14:46:33.669851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.639 [2024-12-01 14:46:33.724038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.018 test_start 00:06:02.018 test_end 00:06:02.018 Performance: 473340 events per second 00:06:02.018 00:06:02.018 real 0m1.254s 00:06:02.018 user 0m1.102s 00:06:02.018 sys 0m0.047s 00:06:02.018 14:46:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.018 14:46:34 -- common/autotest_common.sh@10 -- # set +x 00:06:02.018 ************************************ 00:06:02.018 END TEST event_reactor_perf 00:06:02.018 ************************************ 00:06:02.018 14:46:34 -- event/event.sh@49 -- # uname -s 00:06:02.018 14:46:34 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:02.018 14:46:34 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:02.018 14:46:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.018 14:46:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.018 14:46:34 -- common/autotest_common.sh@10 -- # set +x 00:06:02.018 ************************************ 00:06:02.018 START TEST event_scheduler 00:06:02.018 ************************************ 00:06:02.018 14:46:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:02.018 * Looking for test storage... 00:06:02.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:02.018 14:46:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:02.018 14:46:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:02.018 14:46:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:02.018 14:46:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:02.018 14:46:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:02.018 14:46:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:02.018 14:46:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:02.018 14:46:35 -- scripts/common.sh@335 -- # IFS=.-: 00:06:02.018 14:46:35 -- scripts/common.sh@335 -- # read -ra ver1 00:06:02.018 14:46:35 -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.018 14:46:35 -- scripts/common.sh@336 -- # read -ra ver2 00:06:02.018 14:46:35 -- scripts/common.sh@337 -- # local 'op=<' 00:06:02.018 14:46:35 -- scripts/common.sh@339 -- # ver1_l=2 00:06:02.018 14:46:35 -- scripts/common.sh@340 -- # ver2_l=1 00:06:02.018 14:46:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:02.018 14:46:35 -- scripts/common.sh@343 -- # case "$op" in 00:06:02.018 14:46:35 -- scripts/common.sh@344 -- # : 1 00:06:02.018 14:46:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:02.018 14:46:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.018 14:46:35 -- scripts/common.sh@364 -- # decimal 1 00:06:02.018 14:46:35 -- scripts/common.sh@352 -- # local d=1 00:06:02.018 14:46:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.018 14:46:35 -- scripts/common.sh@354 -- # echo 1 00:06:02.018 14:46:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:02.018 14:46:35 -- scripts/common.sh@365 -- # decimal 2 00:06:02.018 14:46:35 -- scripts/common.sh@352 -- # local d=2 00:06:02.018 14:46:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.018 14:46:35 -- scripts/common.sh@354 -- # echo 2 00:06:02.018 14:46:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:02.018 14:46:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:02.018 14:46:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:02.018 14:46:35 -- scripts/common.sh@367 -- # return 0 00:06:02.018 14:46:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.018 14:46:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.018 --rc genhtml_branch_coverage=1 00:06:02.018 --rc genhtml_function_coverage=1 00:06:02.018 --rc genhtml_legend=1 00:06:02.018 --rc geninfo_all_blocks=1 00:06:02.018 --rc geninfo_unexecuted_blocks=1 00:06:02.018 00:06:02.018 ' 00:06:02.018 14:46:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.018 --rc genhtml_branch_coverage=1 00:06:02.018 --rc genhtml_function_coverage=1 00:06:02.018 --rc genhtml_legend=1 00:06:02.018 --rc geninfo_all_blocks=1 00:06:02.018 --rc geninfo_unexecuted_blocks=1 00:06:02.018 00:06:02.018 ' 00:06:02.018 14:46:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.018 --rc genhtml_branch_coverage=1 00:06:02.018 --rc genhtml_function_coverage=1 00:06:02.018 --rc genhtml_legend=1 00:06:02.018 --rc geninfo_all_blocks=1 00:06:02.018 --rc geninfo_unexecuted_blocks=1 00:06:02.018 00:06:02.018 ' 00:06:02.018 14:46:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.018 --rc genhtml_branch_coverage=1 00:06:02.018 --rc genhtml_function_coverage=1 00:06:02.018 --rc genhtml_legend=1 00:06:02.018 --rc geninfo_all_blocks=1 00:06:02.019 --rc geninfo_unexecuted_blocks=1 00:06:02.019 00:06:02.019 ' 00:06:02.019 14:46:35 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:02.019 14:46:35 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68822 00:06:02.019 14:46:35 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.019 14:46:35 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:02.019 14:46:35 -- scheduler/scheduler.sh@37 -- # waitforlisten 68822 00:06:02.019 14:46:35 -- common/autotest_common.sh@829 -- # '[' -z 68822 ']' 00:06:02.019 14:46:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.019 14:46:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.019 14:46:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
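(Editor's note) The scheduler app above is launched with --wait-for-rpc and the harness then blocks in waitforlisten until /var/tmp/spdk.sock answers before issuing any RPCs. A minimal polling loop in the same spirit — an assumed helper for illustration, not the autotest_common.sh implementation:

    wait_for_rpc_sock() {
      local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
      for ((i = 0; i < retries; i++)); do
        # the socket must exist and the target must answer a trivial RPC
        if [ -S "$sock" ] && scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          return 0
        fi
        sleep 0.1
      done
      return 1
    }
    wait_for_rpc_sock /var/tmp/spdk.sock || echo "scheduler app never came up"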
00:06:02.019 14:46:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.019 14:46:35 -- common/autotest_common.sh@10 -- # set +x 00:06:02.019 [2024-12-01 14:46:35.078738] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.019 [2024-12-01 14:46:35.078887] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68822 ] 00:06:02.277 [2024-12-01 14:46:35.224228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.277 [2024-12-01 14:46:35.326597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.277 [2024-12-01 14:46:35.326706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.277 [2024-12-01 14:46:35.326856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.277 [2024-12-01 14:46:35.326881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.214 14:46:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.214 14:46:36 -- common/autotest_common.sh@862 -- # return 0 00:06:03.214 14:46:36 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:03.214 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.214 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.214 POWER: Env isn't set yet! 00:06:03.214 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:03.214 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.214 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.214 POWER: Attempting to initialise PSTAT power management... 00:06:03.214 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.214 POWER: Cannot set governor of lcore 0 to performance 00:06:03.214 POWER: Attempting to initialise AMD PSTATE power management... 00:06:03.214 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.214 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.214 POWER: Attempting to initialise CPPC power management... 00:06:03.214 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.214 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.214 POWER: Attempting to initialise VM power management... 
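(Editor's note) The POWER errors above come from the DPDK power library probing each backend (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC) by opening the per-core cpufreq governor files under sysfs; on this VM those files are missing, so every backend fails and, as the following lines show, the dynamic scheduler falls back to running without a governor. Roughly what that probe amounts to, shown here only as an illustration (writing the governor requires root):

    gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    if [ -w "$gov" ]; then
      cat "$gov"                                              # current governor
      cat "${gov%scaling_governor}scaling_available_governors"
      echo userspace > "$gov"                                 # what "set governor of lcore 0 to userspace" does
    else
      echo "no writable cpufreq interface for cpu0 - the case in this log"
    fi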
00:06:03.214 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:03.214 POWER: Unable to set Power Management Environment for lcore 0 00:06:03.214 [2024-12-01 14:46:36.096430] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:03.214 [2024-12-01 14:46:36.096443] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:03.214 [2024-12-01 14:46:36.096451] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:03.214 [2024-12-01 14:46:36.096463] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:03.214 [2024-12-01 14:46:36.096470] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:03.214 [2024-12-01 14:46:36.096476] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:03.214 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.214 14:46:36 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:03.214 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.214 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.214 [2024-12-01 14:46:36.213390] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:03.214 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.214 14:46:36 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:03.214 14:46:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.214 14:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.214 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.214 ************************************ 00:06:03.214 START TEST scheduler_create_thread 00:06:03.214 ************************************ 00:06:03.214 14:46:36 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:03.214 14:46:36 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:03.214 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.214 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.214 2 00:06:03.214 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.215 14:46:36 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:03.215 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.215 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.215 3 00:06:03.215 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.215 14:46:36 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:03.215 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.215 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.215 4 00:06:03.215 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.215 14:46:36 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:03.215 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.215 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.215 5 00:06:03.215 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.215 14:46:36 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:03.215 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.215 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.215 6 00:06:03.215 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.215 14:46:36 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:03.215 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.215 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.215 7 00:06:03.215 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.215 14:46:36 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:03.215 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.215 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.215 8 00:06:03.215 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.215 14:46:36 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:03.215 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.215 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.215 9 00:06:03.215 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.215 14:46:36 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:03.215 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.215 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.215 10 00:06:03.215 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.215 14:46:36 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:03.215 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.215 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.473 14:46:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.474 14:46:36 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:03.474 14:46:36 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:03.474 14:46:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.474 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:06:04.409 14:46:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.409 14:46:37 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:04.409 14:46:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.409 14:46:37 -- common/autotest_common.sh@10 -- # set +x 00:06:05.788 14:46:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.788 14:46:38 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:05.788 14:46:38 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:05.788 14:46:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.788 14:46:38 -- common/autotest_common.sh@10 -- # set +x 00:06:06.722 14:46:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.722 00:06:06.722 real 0m3.375s 00:06:06.722 user 0m0.013s 00:06:06.722 sys 0m0.009s 00:06:06.722 14:46:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.722 14:46:39 -- common/autotest_common.sh@10 -- # set +x 00:06:06.722 
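(Editor's note) The scheduler_create_thread test above drives everything through rpc_cmd with the test's scheduler_plugin: it creates pinned active and idle threads on cores 0-3, a one_third_active and a half_active thread, lowers one thread's activity to 50, and creates and deletes a further thread. Issued by hand, the same calls would look roughly like this (plugin name and socket taken from the log; the plugin module must be importable for rpc.py --plugin to load it, and the thread id is assumed to be printed on stdout as the trace's thread_id assignments suggest):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
    tid=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)  # busy thread pinned to core 0
    $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0             # idle thread pinned to core 0
    $RPC scheduler_thread_set_active "$tid" 50                          # drop the busy thread to ~50% activity
    $RPC scheduler_thread_delete "$tid"                                 # remove it again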
************************************ 00:06:06.722 END TEST scheduler_create_thread 00:06:06.722 ************************************ 00:06:06.722 14:46:39 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:06.722 14:46:39 -- scheduler/scheduler.sh@46 -- # killprocess 68822 00:06:06.722 14:46:39 -- common/autotest_common.sh@936 -- # '[' -z 68822 ']' 00:06:06.722 14:46:39 -- common/autotest_common.sh@940 -- # kill -0 68822 00:06:06.722 14:46:39 -- common/autotest_common.sh@941 -- # uname 00:06:06.722 14:46:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:06.722 14:46:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68822 00:06:06.722 killing process with pid 68822 00:06:06.722 14:46:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:06.722 14:46:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:06.722 14:46:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68822' 00:06:06.722 14:46:39 -- common/autotest_common.sh@955 -- # kill 68822 00:06:06.722 14:46:39 -- common/autotest_common.sh@960 -- # wait 68822 00:06:06.980 [2024-12-01 14:46:39.981272] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:07.239 00:06:07.239 real 0m5.390s 00:06:07.239 user 0m11.010s 00:06:07.239 sys 0m0.470s 00:06:07.239 14:46:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.239 14:46:40 -- common/autotest_common.sh@10 -- # set +x 00:06:07.239 ************************************ 00:06:07.239 END TEST event_scheduler 00:06:07.239 ************************************ 00:06:07.239 14:46:40 -- event/event.sh@51 -- # modprobe -n nbd 00:06:07.239 14:46:40 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:07.239 14:46:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.239 14:46:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.239 14:46:40 -- common/autotest_common.sh@10 -- # set +x 00:06:07.239 ************************************ 00:06:07.239 START TEST app_repeat 00:06:07.239 ************************************ 00:06:07.239 14:46:40 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:07.239 14:46:40 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.239 14:46:40 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.239 14:46:40 -- event/event.sh@13 -- # local nbd_list 00:06:07.239 14:46:40 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.239 14:46:40 -- event/event.sh@14 -- # local bdev_list 00:06:07.239 14:46:40 -- event/event.sh@15 -- # local repeat_times=4 00:06:07.239 14:46:40 -- event/event.sh@17 -- # modprobe nbd 00:06:07.239 14:46:40 -- event/event.sh@19 -- # repeat_pid=68945 00:06:07.239 14:46:40 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.239 14:46:40 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:07.239 Process app_repeat pid: 68945 00:06:07.239 14:46:40 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68945' 00:06:07.239 14:46:40 -- event/event.sh@23 -- # for i in {0..2} 00:06:07.239 spdk_app_start Round 0 00:06:07.239 14:46:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:07.239 14:46:40 -- event/event.sh@25 -- # waitforlisten 68945 /var/tmp/spdk-nbd.sock 00:06:07.239 14:46:40 -- common/autotest_common.sh@829 -- # '[' -z 68945 ']' 00:06:07.239 14:46:40 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.239 14:46:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.239 14:46:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.239 14:46:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.239 14:46:40 -- common/autotest_common.sh@10 -- # set +x 00:06:07.239 [2024-12-01 14:46:40.312818] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.239 [2024-12-01 14:46:40.312932] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68945 ] 00:06:07.498 [2024-12-01 14:46:40.449696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.498 [2024-12-01 14:46:40.495840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.498 [2024-12-01 14:46:40.495857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.434 14:46:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.434 14:46:41 -- common/autotest_common.sh@862 -- # return 0 00:06:08.434 14:46:41 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.434 Malloc0 00:06:08.434 14:46:41 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.693 Malloc1 00:06:08.693 14:46:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@12 -- # local i 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.693 14:46:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.951 /dev/nbd0 00:06:08.951 14:46:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.951 14:46:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.951 14:46:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:08.951 14:46:41 -- common/autotest_common.sh@867 -- # local i 00:06:08.951 14:46:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:08.951 
14:46:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:08.951 14:46:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:08.951 14:46:41 -- common/autotest_common.sh@871 -- # break 00:06:08.951 14:46:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:08.951 14:46:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:08.951 14:46:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.951 1+0 records in 00:06:08.951 1+0 records out 00:06:08.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348431 s, 11.8 MB/s 00:06:08.951 14:46:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.951 14:46:41 -- common/autotest_common.sh@884 -- # size=4096 00:06:08.951 14:46:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.951 14:46:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:08.951 14:46:42 -- common/autotest_common.sh@887 -- # return 0 00:06:08.951 14:46:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.951 14:46:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.951 14:46:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.210 /dev/nbd1 00:06:09.210 14:46:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.210 14:46:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.210 14:46:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:09.210 14:46:42 -- common/autotest_common.sh@867 -- # local i 00:06:09.210 14:46:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:09.210 14:46:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:09.210 14:46:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:09.210 14:46:42 -- common/autotest_common.sh@871 -- # break 00:06:09.210 14:46:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:09.210 14:46:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:09.210 14:46:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.210 1+0 records in 00:06:09.210 1+0 records out 00:06:09.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336129 s, 12.2 MB/s 00:06:09.210 14:46:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.210 14:46:42 -- common/autotest_common.sh@884 -- # size=4096 00:06:09.210 14:46:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.210 14:46:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:09.210 14:46:42 -- common/autotest_common.sh@887 -- # return 0 00:06:09.210 14:46:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.210 14:46:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.210 14:46:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.210 14:46:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.210 14:46:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.468 { 00:06:09.468 "bdev_name": "Malloc0", 00:06:09.468 "nbd_device": "/dev/nbd0" 00:06:09.468 }, 00:06:09.468 { 00:06:09.468 "bdev_name": 
"Malloc1", 00:06:09.468 "nbd_device": "/dev/nbd1" 00:06:09.468 } 00:06:09.468 ]' 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.468 { 00:06:09.468 "bdev_name": "Malloc0", 00:06:09.468 "nbd_device": "/dev/nbd0" 00:06:09.468 }, 00:06:09.468 { 00:06:09.468 "bdev_name": "Malloc1", 00:06:09.468 "nbd_device": "/dev/nbd1" 00:06:09.468 } 00:06:09.468 ]' 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.468 /dev/nbd1' 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.468 /dev/nbd1' 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.468 14:46:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.469 14:46:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.469 14:46:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.469 256+0 records in 00:06:09.469 256+0 records out 00:06:09.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00851844 s, 123 MB/s 00:06:09.469 14:46:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.469 14:46:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.727 256+0 records in 00:06:09.727 256+0 records out 00:06:09.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243393 s, 43.1 MB/s 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.727 256+0 records in 00:06:09.727 256+0 records out 00:06:09.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255243 s, 41.1 MB/s 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.727 14:46:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.986 14:46:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.986 14:46:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.986 14:46:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.986 14:46:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.986 14:46:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.986 14:46:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.986 14:46:42 -- bdev/nbd_common.sh@41 -- # break 00:06:09.986 14:46:42 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.986 14:46:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.986 14:46:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@41 -- # break 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.244 14:46:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@65 -- # true 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.506 14:46:43 -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.506 14:46:43 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.799 14:46:43 -- event/event.sh@35 -- # sleep 3 00:06:11.069 [2024-12-01 14:46:43.972426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.069 [2024-12-01 14:46:44.012997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.069 
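(Editor's note) Each app_repeat round above follows the same data check on the two NBD devices backing Malloc0 and Malloc1: write 1 MiB of random data with dd, read it back with cmp, then detach the devices and SIGTERM the app before the next round. Reduced to its essentials (device names and sizes as in the log, not the nbd_common.sh code verbatim):

    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256           # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write it to the exported bdev
      cmp -b -n 1M "$tmp" "$nbd"                             # read back and byte-compare
    done
    rm -f "$tmp"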
[2024-12-01 14:46:44.013016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.069 [2024-12-01 14:46:44.065065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.069 [2024-12-01 14:46:44.065120] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.359 14:46:46 -- event/event.sh@23 -- # for i in {0..2} 00:06:14.359 spdk_app_start Round 1 00:06:14.359 14:46:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:14.359 14:46:46 -- event/event.sh@25 -- # waitforlisten 68945 /var/tmp/spdk-nbd.sock 00:06:14.359 14:46:46 -- common/autotest_common.sh@829 -- # '[' -z 68945 ']' 00:06:14.359 14:46:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.359 14:46:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.359 14:46:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.359 14:46:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.359 14:46:46 -- common/autotest_common.sh@10 -- # set +x 00:06:14.359 14:46:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.359 14:46:47 -- common/autotest_common.sh@862 -- # return 0 00:06:14.359 14:46:47 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.359 Malloc0 00:06:14.359 14:46:47 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.618 Malloc1 00:06:14.618 14:46:47 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@12 -- # local i 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.618 14:46:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.877 /dev/nbd0 00:06:14.877 14:46:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.877 14:46:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.877 14:46:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:14.877 14:46:47 -- common/autotest_common.sh@867 -- # local i 00:06:14.877 14:46:47 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:06:14.877 14:46:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:14.877 14:46:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:14.877 14:46:47 -- common/autotest_common.sh@871 -- # break 00:06:14.877 14:46:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:14.877 14:46:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:14.877 14:46:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.877 1+0 records in 00:06:14.877 1+0 records out 00:06:14.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304025 s, 13.5 MB/s 00:06:14.877 14:46:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.877 14:46:47 -- common/autotest_common.sh@884 -- # size=4096 00:06:14.877 14:46:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.877 14:46:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:14.877 14:46:47 -- common/autotest_common.sh@887 -- # return 0 00:06:14.877 14:46:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.877 14:46:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.877 14:46:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.136 /dev/nbd1 00:06:15.136 14:46:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.136 14:46:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.136 14:46:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:15.136 14:46:48 -- common/autotest_common.sh@867 -- # local i 00:06:15.136 14:46:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:15.136 14:46:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:15.136 14:46:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:15.136 14:46:48 -- common/autotest_common.sh@871 -- # break 00:06:15.136 14:46:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:15.136 14:46:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:15.136 14:46:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.136 1+0 records in 00:06:15.136 1+0 records out 00:06:15.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00258912 s, 1.6 MB/s 00:06:15.136 14:46:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.136 14:46:48 -- common/autotest_common.sh@884 -- # size=4096 00:06:15.136 14:46:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.136 14:46:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:15.136 14:46:48 -- common/autotest_common.sh@887 -- # return 0 00:06:15.136 14:46:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.136 14:46:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.136 14:46:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.136 14:46:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.136 14:46:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.395 { 00:06:15.395 "bdev_name": "Malloc0", 00:06:15.395 "nbd_device": "/dev/nbd0" 00:06:15.395 }, 00:06:15.395 { 
00:06:15.395 "bdev_name": "Malloc1", 00:06:15.395 "nbd_device": "/dev/nbd1" 00:06:15.395 } 00:06:15.395 ]' 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.395 { 00:06:15.395 "bdev_name": "Malloc0", 00:06:15.395 "nbd_device": "/dev/nbd0" 00:06:15.395 }, 00:06:15.395 { 00:06:15.395 "bdev_name": "Malloc1", 00:06:15.395 "nbd_device": "/dev/nbd1" 00:06:15.395 } 00:06:15.395 ]' 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.395 /dev/nbd1' 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.395 /dev/nbd1' 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.395 256+0 records in 00:06:15.395 256+0 records out 00:06:15.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00963254 s, 109 MB/s 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.395 14:46:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.655 256+0 records in 00:06:15.655 256+0 records out 00:06:15.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023597 s, 44.4 MB/s 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.655 256+0 records in 00:06:15.655 256+0 records out 00:06:15.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251315 s, 41.7 MB/s 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.655 14:46:48 
-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@51 -- # local i 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.655 14:46:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.914 14:46:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.914 14:46:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.914 14:46:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.914 14:46:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.914 14:46:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.914 14:46:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.914 14:46:48 -- bdev/nbd_common.sh@41 -- # break 00:06:15.914 14:46:48 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.914 14:46:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.914 14:46:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@41 -- # break 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.173 14:46:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@65 -- # true 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.431 14:46:49 -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.431 14:46:49 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.690 14:46:49 -- event/event.sh@35 -- # sleep 3 00:06:16.950 [2024-12-01 14:46:49.861889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.950 [2024-12-01 14:46:49.902372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on 
core 1 00:06:16.950 [2024-12-01 14:46:49.902389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.950 [2024-12-01 14:46:49.952784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.950 [2024-12-01 14:46:49.952842] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.235 14:46:52 -- event/event.sh@23 -- # for i in {0..2} 00:06:20.235 spdk_app_start Round 2 00:06:20.235 14:46:52 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.235 14:46:52 -- event/event.sh@25 -- # waitforlisten 68945 /var/tmp/spdk-nbd.sock 00:06:20.235 14:46:52 -- common/autotest_common.sh@829 -- # '[' -z 68945 ']' 00:06:20.235 14:46:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.235 14:46:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.235 14:46:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.235 14:46:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.235 14:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:20.235 14:46:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.235 14:46:52 -- common/autotest_common.sh@862 -- # return 0 00:06:20.235 14:46:52 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.235 Malloc0 00:06:20.235 14:46:53 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.493 Malloc1 00:06:20.493 14:46:53 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@12 -- # local i 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.493 14:46:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.751 /dev/nbd0 00:06:20.751 14:46:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.751 14:46:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.751 14:46:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:20.751 14:46:53 -- common/autotest_common.sh@867 -- # local i 00:06:20.751 14:46:53 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:20.751 14:46:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:20.751 14:46:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:20.751 14:46:53 -- common/autotest_common.sh@871 -- # break 00:06:20.751 14:46:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:20.751 14:46:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:20.751 14:46:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.751 1+0 records in 00:06:20.751 1+0 records out 00:06:20.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020015 s, 20.5 MB/s 00:06:20.751 14:46:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.751 14:46:53 -- common/autotest_common.sh@884 -- # size=4096 00:06:20.751 14:46:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.751 14:46:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:20.751 14:46:53 -- common/autotest_common.sh@887 -- # return 0 00:06:20.751 14:46:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.751 14:46:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.751 14:46:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.008 /dev/nbd1 00:06:21.008 14:46:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.008 14:46:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.008 14:46:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:21.008 14:46:54 -- common/autotest_common.sh@867 -- # local i 00:06:21.008 14:46:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:21.008 14:46:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:21.008 14:46:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:21.008 14:46:54 -- common/autotest_common.sh@871 -- # break 00:06:21.008 14:46:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:21.008 14:46:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:21.008 14:46:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.008 1+0 records in 00:06:21.008 1+0 records out 00:06:21.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023858 s, 17.2 MB/s 00:06:21.008 14:46:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.008 14:46:54 -- common/autotest_common.sh@884 -- # size=4096 00:06:21.008 14:46:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.008 14:46:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:21.008 14:46:54 -- common/autotest_common.sh@887 -- # return 0 00:06:21.008 14:46:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.008 14:46:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.008 14:46:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.008 14:46:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.008 14:46:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.574 { 00:06:21.574 "bdev_name": "Malloc0", 00:06:21.574 "nbd_device": "/dev/nbd0" 
00:06:21.574 }, 00:06:21.574 { 00:06:21.574 "bdev_name": "Malloc1", 00:06:21.574 "nbd_device": "/dev/nbd1" 00:06:21.574 } 00:06:21.574 ]' 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.574 { 00:06:21.574 "bdev_name": "Malloc0", 00:06:21.574 "nbd_device": "/dev/nbd0" 00:06:21.574 }, 00:06:21.574 { 00:06:21.574 "bdev_name": "Malloc1", 00:06:21.574 "nbd_device": "/dev/nbd1" 00:06:21.574 } 00:06:21.574 ]' 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.574 /dev/nbd1' 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.574 /dev/nbd1' 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.574 256+0 records in 00:06:21.574 256+0 records out 00:06:21.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00918095 s, 114 MB/s 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.574 256+0 records in 00:06:21.574 256+0 records out 00:06:21.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234963 s, 44.6 MB/s 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.574 256+0 records in 00:06:21.574 256+0 records out 00:06:21.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244337 s, 42.9 MB/s 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@51 -- # local i 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.574 14:46:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.833 14:46:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.833 14:46:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.833 14:46:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.833 14:46:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.833 14:46:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.833 14:46:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.833 14:46:54 -- bdev/nbd_common.sh@41 -- # break 00:06:21.833 14:46:54 -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.833 14:46:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.833 14:46:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@41 -- # break 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.093 14:46:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@65 -- # true 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.352 14:46:55 -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.352 14:46:55 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.610 14:46:55 -- event/event.sh@35 -- # sleep 3 00:06:22.869 [2024-12-01 14:46:55.763600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.869 [2024-12-01 14:46:55.805349] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:22.869 [2024-12-01 14:46:55.805366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.869 [2024-12-01 14:46:55.856202] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.869 [2024-12-01 14:46:55.856250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.156 14:46:58 -- event/event.sh@38 -- # waitforlisten 68945 /var/tmp/spdk-nbd.sock 00:06:26.156 14:46:58 -- common/autotest_common.sh@829 -- # '[' -z 68945 ']' 00:06:26.156 14:46:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.156 14:46:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.156 14:46:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.156 14:46:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.156 14:46:58 -- common/autotest_common.sh@10 -- # set +x 00:06:26.156 14:46:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.156 14:46:58 -- common/autotest_common.sh@862 -- # return 0 00:06:26.156 14:46:58 -- event/event.sh@39 -- # killprocess 68945 00:06:26.156 14:46:58 -- common/autotest_common.sh@936 -- # '[' -z 68945 ']' 00:06:26.156 14:46:58 -- common/autotest_common.sh@940 -- # kill -0 68945 00:06:26.156 14:46:58 -- common/autotest_common.sh@941 -- # uname 00:06:26.156 14:46:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.156 14:46:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68945 00:06:26.156 killing process with pid 68945 00:06:26.156 14:46:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.156 14:46:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.156 14:46:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68945' 00:06:26.156 14:46:58 -- common/autotest_common.sh@955 -- # kill 68945 00:06:26.156 14:46:58 -- common/autotest_common.sh@960 -- # wait 68945 00:06:26.156 spdk_app_start is called in Round 0. 00:06:26.156 Shutdown signal received, stop current app iteration 00:06:26.156 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:26.156 spdk_app_start is called in Round 1. 00:06:26.156 Shutdown signal received, stop current app iteration 00:06:26.156 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:26.156 spdk_app_start is called in Round 2. 00:06:26.156 Shutdown signal received, stop current app iteration 00:06:26.156 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:26.156 spdk_app_start is called in Round 3. 
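The dd/cmp sequence traced above (nbd_common.sh@76 through @83) is the heart of the nbd data-verify step: write 1 MiB of random data to a temp file, push it onto every exported /dev/nbdX with O_DIRECT, then byte-compare each device against the file. A minimal standalone sketch of that pattern follows; the device list and the 1 MiB size mirror the trace, while the mktemp path is an illustrative substitute for test/event/nbdrandtest.

  # Write-then-verify pattern for NBD devices exported by spdk-nbd.
  # Assumes /dev/nbd0 and /dev/nbd1 are scratch devices that may be overwritten.
  set -euo pipefail
  tmp_file=$(mktemp)
  nbd_list=(/dev/nbd0 /dev/nbd1)

  # Write phase: 256 x 4 KiB = 1 MiB of random data, copied to every device.
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # Verify phase: compare the first 1 MiB of each device with the source file;
  # cmp exits non-zero on the first differing byte, failing the test.
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"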
00:06:26.156 Shutdown signal received, stop current app iteration 00:06:26.156 14:46:59 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.156 14:46:59 -- event/event.sh@42 -- # return 0 00:06:26.156 00:06:26.156 real 0m18.803s 00:06:26.156 user 0m42.538s 00:06:26.156 sys 0m2.774s 00:06:26.156 14:46:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.156 ************************************ 00:06:26.156 END TEST app_repeat 00:06:26.156 ************************************ 00:06:26.156 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.156 14:46:59 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.156 14:46:59 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.157 14:46:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.157 14:46:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.157 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.157 ************************************ 00:06:26.157 START TEST cpu_locks 00:06:26.157 ************************************ 00:06:26.157 14:46:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.157 * Looking for test storage... 00:06:26.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:26.157 14:46:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:26.157 14:46:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:26.157 14:46:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:26.416 14:46:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:26.416 14:46:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:26.416 14:46:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:26.416 14:46:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:26.416 14:46:59 -- scripts/common.sh@335 -- # IFS=.-: 00:06:26.416 14:46:59 -- scripts/common.sh@335 -- # read -ra ver1 00:06:26.416 14:46:59 -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.416 14:46:59 -- scripts/common.sh@336 -- # read -ra ver2 00:06:26.416 14:46:59 -- scripts/common.sh@337 -- # local 'op=<' 00:06:26.416 14:46:59 -- scripts/common.sh@339 -- # ver1_l=2 00:06:26.416 14:46:59 -- scripts/common.sh@340 -- # ver2_l=1 00:06:26.416 14:46:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:26.416 14:46:59 -- scripts/common.sh@343 -- # case "$op" in 00:06:26.416 14:46:59 -- scripts/common.sh@344 -- # : 1 00:06:26.416 14:46:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:26.416 14:46:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.416 14:46:59 -- scripts/common.sh@364 -- # decimal 1 00:06:26.416 14:46:59 -- scripts/common.sh@352 -- # local d=1 00:06:26.416 14:46:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.416 14:46:59 -- scripts/common.sh@354 -- # echo 1 00:06:26.416 14:46:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:26.416 14:46:59 -- scripts/common.sh@365 -- # decimal 2 00:06:26.416 14:46:59 -- scripts/common.sh@352 -- # local d=2 00:06:26.416 14:46:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.416 14:46:59 -- scripts/common.sh@354 -- # echo 2 00:06:26.416 14:46:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:26.416 14:46:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:26.416 14:46:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:26.416 14:46:59 -- scripts/common.sh@367 -- # return 0 00:06:26.416 14:46:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.416 14:46:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:26.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.416 --rc genhtml_branch_coverage=1 00:06:26.416 --rc genhtml_function_coverage=1 00:06:26.416 --rc genhtml_legend=1 00:06:26.416 --rc geninfo_all_blocks=1 00:06:26.416 --rc geninfo_unexecuted_blocks=1 00:06:26.416 00:06:26.416 ' 00:06:26.416 14:46:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:26.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.416 --rc genhtml_branch_coverage=1 00:06:26.416 --rc genhtml_function_coverage=1 00:06:26.416 --rc genhtml_legend=1 00:06:26.416 --rc geninfo_all_blocks=1 00:06:26.416 --rc geninfo_unexecuted_blocks=1 00:06:26.416 00:06:26.416 ' 00:06:26.416 14:46:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:26.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.416 --rc genhtml_branch_coverage=1 00:06:26.416 --rc genhtml_function_coverage=1 00:06:26.416 --rc genhtml_legend=1 00:06:26.416 --rc geninfo_all_blocks=1 00:06:26.416 --rc geninfo_unexecuted_blocks=1 00:06:26.416 00:06:26.416 ' 00:06:26.416 14:46:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:26.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.416 --rc genhtml_branch_coverage=1 00:06:26.417 --rc genhtml_function_coverage=1 00:06:26.417 --rc genhtml_legend=1 00:06:26.417 --rc geninfo_all_blocks=1 00:06:26.417 --rc geninfo_unexecuted_blocks=1 00:06:26.417 00:06:26.417 ' 00:06:26.417 14:46:59 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:26.417 14:46:59 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:26.417 14:46:59 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:26.417 14:46:59 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:26.417 14:46:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.417 14:46:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.417 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.417 ************************************ 00:06:26.417 START TEST default_locks 00:06:26.417 ************************************ 00:06:26.417 14:46:59 -- common/autotest_common.sh@1114 -- # default_locks 00:06:26.417 14:46:59 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69571 00:06:26.417 14:46:59 -- event/cpu_locks.sh@47 -- # waitforlisten 69571 00:06:26.417 14:46:59 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:06:26.417 14:46:59 -- common/autotest_common.sh@829 -- # '[' -z 69571 ']' 00:06:26.417 14:46:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.417 14:46:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.417 14:46:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.417 14:46:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.417 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.417 [2024-12-01 14:46:59.436328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.417 [2024-12-01 14:46:59.436483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69571 ] 00:06:26.676 [2024-12-01 14:46:59.583959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.676 [2024-12-01 14:46:59.633420] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.676 [2024-12-01 14:46:59.633565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.613 14:47:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.613 14:47:00 -- common/autotest_common.sh@862 -- # return 0 00:06:27.613 14:47:00 -- event/cpu_locks.sh@49 -- # locks_exist 69571 00:06:27.613 14:47:00 -- event/cpu_locks.sh@22 -- # lslocks -p 69571 00:06:27.613 14:47:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.613 14:47:00 -- event/cpu_locks.sh@50 -- # killprocess 69571 00:06:27.613 14:47:00 -- common/autotest_common.sh@936 -- # '[' -z 69571 ']' 00:06:27.613 14:47:00 -- common/autotest_common.sh@940 -- # kill -0 69571 00:06:27.613 14:47:00 -- common/autotest_common.sh@941 -- # uname 00:06:27.613 14:47:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.613 14:47:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69571 00:06:27.613 14:47:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.613 killing process with pid 69571 00:06:27.613 14:47:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.613 14:47:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69571' 00:06:27.613 14:47:00 -- common/autotest_common.sh@955 -- # kill 69571 00:06:27.613 14:47:00 -- common/autotest_common.sh@960 -- # wait 69571 00:06:28.182 14:47:01 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69571 00:06:28.182 14:47:01 -- common/autotest_common.sh@650 -- # local es=0 00:06:28.182 14:47:01 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69571 00:06:28.182 14:47:01 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.182 14:47:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.182 14:47:01 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.182 14:47:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.182 14:47:01 -- common/autotest_common.sh@653 -- # waitforlisten 69571 00:06:28.182 14:47:01 -- common/autotest_common.sh@829 -- # '[' -z 69571 ']' 00:06:28.182 14:47:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.182 14:47:01 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.182 14:47:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.182 14:47:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.182 14:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:28.182 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69571) - No such process 00:06:28.182 ERROR: process (pid: 69571) is no longer running 00:06:28.182 14:47:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.182 14:47:01 -- common/autotest_common.sh@862 -- # return 1 00:06:28.182 14:47:01 -- common/autotest_common.sh@653 -- # es=1 00:06:28.182 14:47:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.182 14:47:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.182 14:47:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.182 14:47:01 -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.182 14:47:01 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.182 14:47:01 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.182 14:47:01 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.182 00:06:28.182 real 0m1.706s 00:06:28.182 user 0m1.824s 00:06:28.182 sys 0m0.534s 00:06:28.182 ************************************ 00:06:28.182 END TEST default_locks 00:06:28.182 ************************************ 00:06:28.182 14:47:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.182 14:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:28.182 14:47:01 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:28.182 14:47:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.182 14:47:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.182 14:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:28.182 ************************************ 00:06:28.182 START TEST default_locks_via_rpc 00:06:28.182 ************************************ 00:06:28.182 14:47:01 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:28.182 14:47:01 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69635 00:06:28.182 14:47:01 -- event/cpu_locks.sh@63 -- # waitforlisten 69635 00:06:28.182 14:47:01 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.182 14:47:01 -- common/autotest_common.sh@829 -- # '[' -z 69635 ']' 00:06:28.182 14:47:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.182 14:47:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.182 14:47:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.182 14:47:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.182 14:47:01 -- common/autotest_common.sh@10 -- # set +x 00:06:28.182 [2024-12-01 14:47:01.150032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
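The teardown above only passes because waitforlisten is expected to fail once pid 69571 is gone: the NOT wrapper from test/common/autotest_common.sh inverts the exit status, so "No such process" is the success path. A rough sketch of that inversion, simplified from the helper visible in the trace (the real one also validates its argument and special-cases exit codes above 128):

  # Negative-test helper: succeed only when the wrapped command fails.
  NOT() {
      local es=0
      "$@" || es=$?            # run the command, remember its exit status
      if (( es == 0 )); then
          echo "NOT: '$*' unexpectedly succeeded" >&2
          return 1             # the command worked, so the negative test fails
      fi
      return 0                 # the command failed as expected
  }

  NOT false                    # false always fails, so this returns 0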
00:06:28.182 [2024-12-01 14:47:01.150127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69635 ] 00:06:28.182 [2024-12-01 14:47:01.282525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.442 [2024-12-01 14:47:01.333795] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.442 [2024-12-01 14:47:01.333942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.379 14:47:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.379 14:47:02 -- common/autotest_common.sh@862 -- # return 0 00:06:29.379 14:47:02 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.379 14:47:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.379 14:47:02 -- common/autotest_common.sh@10 -- # set +x 00:06:29.379 14:47:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.379 14:47:02 -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.379 14:47:02 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.379 14:47:02 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.379 14:47:02 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.379 14:47:02 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.379 14:47:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.379 14:47:02 -- common/autotest_common.sh@10 -- # set +x 00:06:29.379 14:47:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.379 14:47:02 -- event/cpu_locks.sh@71 -- # locks_exist 69635 00:06:29.379 14:47:02 -- event/cpu_locks.sh@22 -- # lslocks -p 69635 00:06:29.379 14:47:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.639 14:47:02 -- event/cpu_locks.sh@73 -- # killprocess 69635 00:06:29.639 14:47:02 -- common/autotest_common.sh@936 -- # '[' -z 69635 ']' 00:06:29.639 14:47:02 -- common/autotest_common.sh@940 -- # kill -0 69635 00:06:29.639 14:47:02 -- common/autotest_common.sh@941 -- # uname 00:06:29.639 14:47:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.639 14:47:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69635 00:06:29.639 killing process with pid 69635 00:06:29.639 14:47:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.639 14:47:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.639 14:47:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69635' 00:06:29.639 14:47:02 -- common/autotest_common.sh@955 -- # kill 69635 00:06:29.639 14:47:02 -- common/autotest_common.sh@960 -- # wait 69635 00:06:30.208 00:06:30.208 real 0m1.918s 00:06:30.208 user 0m2.129s 00:06:30.208 sys 0m0.558s 00:06:30.208 14:47:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.208 ************************************ 00:06:30.208 14:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:30.208 END TEST default_locks_via_rpc 00:06:30.208 ************************************ 00:06:30.208 14:47:03 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:30.208 14:47:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.208 14:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.208 14:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:30.208 
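Both default_locks variants above decide pass/fail with the same one-line check: ask lslocks whether the target pid still holds a file lock named spdk_cpu_lock. A compact restatement of that locks_exist helper (lslocks is the util-linux tool; the pids 69571 and 69635 belong to this run only):

  # Return success if the given spdk_tgt pid holds a CPU core lock file.
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  # In the trace this is asserted right after startup in default_locks, and again
  # after the framework_enable_cpumask_locks RPC re-arms per-core locking.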
************************************ 00:06:30.208 START TEST non_locking_app_on_locked_coremask 00:06:30.208 ************************************ 00:06:30.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.208 14:47:03 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:30.208 14:47:03 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69700 00:06:30.208 14:47:03 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.208 14:47:03 -- event/cpu_locks.sh@81 -- # waitforlisten 69700 /var/tmp/spdk.sock 00:06:30.208 14:47:03 -- common/autotest_common.sh@829 -- # '[' -z 69700 ']' 00:06:30.208 14:47:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.208 14:47:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.208 14:47:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.208 14:47:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.208 14:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:30.208 [2024-12-01 14:47:03.123395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.208 [2024-12-01 14:47:03.123607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69700 ] 00:06:30.208 [2024-12-01 14:47:03.253985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.208 [2024-12-01 14:47:03.303593] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.208 [2024-12-01 14:47:03.304025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.145 14:47:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.145 14:47:04 -- common/autotest_common.sh@862 -- # return 0 00:06:31.145 14:47:04 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69727 00:06:31.145 14:47:04 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:31.145 14:47:04 -- event/cpu_locks.sh@85 -- # waitforlisten 69727 /var/tmp/spdk2.sock 00:06:31.145 14:47:04 -- common/autotest_common.sh@829 -- # '[' -z 69727 ']' 00:06:31.145 14:47:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.145 14:47:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.145 14:47:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.145 14:47:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.145 14:47:04 -- common/autotest_common.sh@10 -- # set +x 00:06:31.145 [2024-12-01 14:47:04.167341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.145 [2024-12-01 14:47:04.167428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69727 ] 00:06:31.403 [2024-12-01 14:47:04.309445] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
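What makes non_locking_app_on_locked_coremask interesting is the second launch just traced: it reuses core 0 (-m 0x1) but only comes up because --disable-cpumask-locks skips the core-lock claim, and it needs its own RPC socket so the two daemons do not collide on /var/tmp/spdk.sock. A minimal reproduction of that launch, using the paths from the trace and a simplified stand-in for waitforlisten:

  # First instance claims the core 0 lock (default behaviour of -m 0x1).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &

  # Second instance on the same core: opt out of the lock, use a private socket.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &

  # Crude waitforlisten substitute: poll each RPC socket until it answers.
  for sock in /var/tmp/spdk.sock /var/tmp/spdk2.sock; do
      until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
              >/dev/null 2>&1; do
          sleep 0.5
      done
  done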
00:06:31.403 [2024-12-01 14:47:04.309480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.403 [2024-12-01 14:47:04.414605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.403 [2024-12-01 14:47:04.414747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.970 14:47:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.970 14:47:05 -- common/autotest_common.sh@862 -- # return 0 00:06:31.970 14:47:05 -- event/cpu_locks.sh@87 -- # locks_exist 69700 00:06:31.970 14:47:05 -- event/cpu_locks.sh@22 -- # lslocks -p 69700 00:06:31.970 14:47:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.903 14:47:05 -- event/cpu_locks.sh@89 -- # killprocess 69700 00:06:32.903 14:47:05 -- common/autotest_common.sh@936 -- # '[' -z 69700 ']' 00:06:32.903 14:47:05 -- common/autotest_common.sh@940 -- # kill -0 69700 00:06:32.903 14:47:05 -- common/autotest_common.sh@941 -- # uname 00:06:32.903 14:47:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.903 14:47:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69700 00:06:32.903 killing process with pid 69700 00:06:32.903 14:47:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:32.903 14:47:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:32.903 14:47:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69700' 00:06:32.903 14:47:05 -- common/autotest_common.sh@955 -- # kill 69700 00:06:32.903 14:47:05 -- common/autotest_common.sh@960 -- # wait 69700 00:06:33.470 14:47:06 -- event/cpu_locks.sh@90 -- # killprocess 69727 00:06:33.470 14:47:06 -- common/autotest_common.sh@936 -- # '[' -z 69727 ']' 00:06:33.470 14:47:06 -- common/autotest_common.sh@940 -- # kill -0 69727 00:06:33.470 14:47:06 -- common/autotest_common.sh@941 -- # uname 00:06:33.470 14:47:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.729 14:47:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69727 00:06:33.729 killing process with pid 69727 00:06:33.729 14:47:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.729 14:47:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.729 14:47:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69727' 00:06:33.729 14:47:06 -- common/autotest_common.sh@955 -- # kill 69727 00:06:33.729 14:47:06 -- common/autotest_common.sh@960 -- # wait 69727 00:06:33.988 00:06:33.988 real 0m3.873s 00:06:33.988 user 0m4.307s 00:06:33.988 sys 0m1.102s 00:06:33.988 14:47:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.988 ************************************ 00:06:33.988 END TEST non_locking_app_on_locked_coremask 00:06:33.988 ************************************ 00:06:33.988 14:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:33.988 14:47:06 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:33.988 14:47:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:33.988 14:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.988 14:47:06 -- common/autotest_common.sh@10 -- # set +x 00:06:33.988 ************************************ 00:06:33.988 START TEST locking_app_on_unlocked_coremask 00:06:33.988 ************************************ 00:06:33.988 14:47:06 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:33.988 14:47:07 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69806 00:06:33.988 14:47:07 -- event/cpu_locks.sh@99 -- # waitforlisten 69806 /var/tmp/spdk.sock 00:06:33.988 14:47:07 -- common/autotest_common.sh@829 -- # '[' -z 69806 ']' 00:06:33.988 14:47:07 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:33.988 14:47:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.988 14:47:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.988 14:47:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.988 14:47:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.988 14:47:07 -- common/autotest_common.sh@10 -- # set +x 00:06:33.988 [2024-12-01 14:47:07.066217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.988 [2024-12-01 14:47:07.066330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69806 ] 00:06:34.246 [2024-12-01 14:47:07.205718] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:34.246 [2024-12-01 14:47:07.205928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.247 [2024-12-01 14:47:07.264760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.247 [2024-12-01 14:47:07.264903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.183 14:47:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.183 14:47:08 -- common/autotest_common.sh@862 -- # return 0 00:06:35.183 14:47:08 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69834 00:06:35.183 14:47:08 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.183 14:47:08 -- event/cpu_locks.sh@103 -- # waitforlisten 69834 /var/tmp/spdk2.sock 00:06:35.183 14:47:08 -- common/autotest_common.sh@829 -- # '[' -z 69834 ']' 00:06:35.183 14:47:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.183 14:47:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.183 14:47:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.183 14:47:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.183 14:47:08 -- common/autotest_common.sh@10 -- # set +x 00:06:35.183 [2024-12-01 14:47:08.121589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
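Every test in this file tears its targets down through the same killprocess helper seen repeatedly above: confirm the pid is alive, confirm it is a reactor rather than a sudo wrapper, send SIGTERM, then wait so the exit status is reaped. A condensed sketch of that flow; the sudo branch of the real helper is elided here:

  # Teardown helper in the spirit of killprocess from autotest_common.sh.
  killprocess() {
      local pid=$1
      kill -0 "$pid"                                    # pid must still exist
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
      [[ "$process_name" != sudo ]]                     # sketch: refuse sudo wrappers
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true     # reap the child; SIGTERM makes wait return non-zero
  }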
00:06:35.183 [2024-12-01 14:47:08.121907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69834 ] 00:06:35.183 [2024-12-01 14:47:08.263454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.442 [2024-12-01 14:47:08.378703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.442 [2024-12-01 14:47:08.378856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.010 14:47:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.010 14:47:09 -- common/autotest_common.sh@862 -- # return 0 00:06:36.010 14:47:09 -- event/cpu_locks.sh@105 -- # locks_exist 69834 00:06:36.010 14:47:09 -- event/cpu_locks.sh@22 -- # lslocks -p 69834 00:06:36.010 14:47:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.946 14:47:09 -- event/cpu_locks.sh@107 -- # killprocess 69806 00:06:36.946 14:47:09 -- common/autotest_common.sh@936 -- # '[' -z 69806 ']' 00:06:36.946 14:47:09 -- common/autotest_common.sh@940 -- # kill -0 69806 00:06:36.946 14:47:09 -- common/autotest_common.sh@941 -- # uname 00:06:36.946 14:47:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.946 14:47:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69806 00:06:36.946 killing process with pid 69806 00:06:36.946 14:47:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.947 14:47:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.947 14:47:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69806' 00:06:36.947 14:47:09 -- common/autotest_common.sh@955 -- # kill 69806 00:06:36.947 14:47:09 -- common/autotest_common.sh@960 -- # wait 69806 00:06:37.515 14:47:10 -- event/cpu_locks.sh@108 -- # killprocess 69834 00:06:37.515 14:47:10 -- common/autotest_common.sh@936 -- # '[' -z 69834 ']' 00:06:37.515 14:47:10 -- common/autotest_common.sh@940 -- # kill -0 69834 00:06:37.515 14:47:10 -- common/autotest_common.sh@941 -- # uname 00:06:37.515 14:47:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.515 14:47:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69834 00:06:37.515 killing process with pid 69834 00:06:37.515 14:47:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.515 14:47:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.515 14:47:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69834' 00:06:37.515 14:47:10 -- common/autotest_common.sh@955 -- # kill 69834 00:06:37.515 14:47:10 -- common/autotest_common.sh@960 -- # wait 69834 00:06:38.083 ************************************ 00:06:38.083 END TEST locking_app_on_unlocked_coremask 00:06:38.083 ************************************ 00:06:38.083 00:06:38.083 real 0m3.918s 00:06:38.083 user 0m4.378s 00:06:38.083 sys 0m1.114s 00:06:38.083 14:47:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.083 14:47:10 -- common/autotest_common.sh@10 -- # set +x 00:06:38.083 14:47:10 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:38.083 14:47:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.083 14:47:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.083 14:47:10 -- common/autotest_common.sh@10 -- # set +x 
00:06:38.083 ************************************ 00:06:38.083 START TEST locking_app_on_locked_coremask 00:06:38.083 ************************************ 00:06:38.083 14:47:10 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:38.083 14:47:10 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69913 00:06:38.083 14:47:10 -- event/cpu_locks.sh@116 -- # waitforlisten 69913 /var/tmp/spdk.sock 00:06:38.083 14:47:10 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.084 14:47:10 -- common/autotest_common.sh@829 -- # '[' -z 69913 ']' 00:06:38.084 14:47:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.084 14:47:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.084 14:47:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.084 14:47:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.084 14:47:10 -- common/autotest_common.sh@10 -- # set +x 00:06:38.084 [2024-12-01 14:47:11.044546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.084 [2024-12-01 14:47:11.044858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69913 ] 00:06:38.084 [2024-12-01 14:47:11.182633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.343 [2024-12-01 14:47:11.231700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.343 [2024-12-01 14:47:11.231860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.912 14:47:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.912 14:47:12 -- common/autotest_common.sh@862 -- # return 0 00:06:38.912 14:47:12 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69941 00:06:38.912 14:47:12 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69941 /var/tmp/spdk2.sock 00:06:38.912 14:47:12 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.912 14:47:12 -- common/autotest_common.sh@650 -- # local es=0 00:06:38.912 14:47:12 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69941 /var/tmp/spdk2.sock 00:06:38.912 14:47:12 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:38.912 14:47:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.912 14:47:12 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:38.912 14:47:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.912 14:47:12 -- common/autotest_common.sh@653 -- # waitforlisten 69941 /var/tmp/spdk2.sock 00:06:38.912 14:47:12 -- common/autotest_common.sh@829 -- # '[' -z 69941 ']' 00:06:38.912 14:47:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.912 14:47:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.912 14:47:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:38.912 14:47:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.912 14:47:12 -- common/autotest_common.sh@10 -- # set +x 00:06:39.171 [2024-12-01 14:47:12.073422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.171 [2024-12-01 14:47:12.074393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69941 ] 00:06:39.171 [2024-12-01 14:47:12.218693] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69913 has claimed it. 00:06:39.171 [2024-12-01 14:47:12.218737] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.739 ERROR: process (pid: 69941) is no longer running 00:06:39.739 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69941) - No such process 00:06:39.739 14:47:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.739 14:47:12 -- common/autotest_common.sh@862 -- # return 1 00:06:39.739 14:47:12 -- common/autotest_common.sh@653 -- # es=1 00:06:39.739 14:47:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.739 14:47:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.739 14:47:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.739 14:47:12 -- event/cpu_locks.sh@122 -- # locks_exist 69913 00:06:39.739 14:47:12 -- event/cpu_locks.sh@22 -- # lslocks -p 69913 00:06:39.739 14:47:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.307 14:47:13 -- event/cpu_locks.sh@124 -- # killprocess 69913 00:06:40.307 14:47:13 -- common/autotest_common.sh@936 -- # '[' -z 69913 ']' 00:06:40.307 14:47:13 -- common/autotest_common.sh@940 -- # kill -0 69913 00:06:40.307 14:47:13 -- common/autotest_common.sh@941 -- # uname 00:06:40.307 14:47:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:40.307 14:47:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69913 00:06:40.307 killing process with pid 69913 00:06:40.307 14:47:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:40.307 14:47:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:40.307 14:47:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69913' 00:06:40.307 14:47:13 -- common/autotest_common.sh@955 -- # kill 69913 00:06:40.307 14:47:13 -- common/autotest_common.sh@960 -- # wait 69913 00:06:40.566 00:06:40.566 real 0m2.523s 00:06:40.566 user 0m2.887s 00:06:40.566 sys 0m0.639s 00:06:40.566 14:47:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.566 ************************************ 00:06:40.566 14:47:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.566 END TEST locking_app_on_locked_coremask 00:06:40.566 ************************************ 00:06:40.566 14:47:13 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:40.566 14:47:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.566 14:47:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.566 14:47:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.566 ************************************ 00:06:40.566 START TEST locking_overlapped_coremask 00:06:40.566 ************************************ 00:06:40.566 14:47:13 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:40.566 14:47:13 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69992 00:06:40.566 14:47:13 -- event/cpu_locks.sh@133 -- # waitforlisten 69992 /var/tmp/spdk.sock 00:06:40.566 14:47:13 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:40.566 14:47:13 -- common/autotest_common.sh@829 -- # '[' -z 69992 ']' 00:06:40.566 14:47:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.566 14:47:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.566 14:47:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.566 14:47:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.566 14:47:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.566 [2024-12-01 14:47:13.612806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.566 [2024-12-01 14:47:13.613118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69992 ] 00:06:40.825 [2024-12-01 14:47:13.750643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.825 [2024-12-01 14:47:13.807132] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.825 [2024-12-01 14:47:13.807667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.825 [2024-12-01 14:47:13.807747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.825 [2024-12-01 14:47:13.807782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.764 14:47:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.764 14:47:14 -- common/autotest_common.sh@862 -- # return 0 00:06:41.764 14:47:14 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70021 00:06:41.764 14:47:14 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:41.764 14:47:14 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70021 /var/tmp/spdk2.sock 00:06:41.764 14:47:14 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.764 14:47:14 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70021 /var/tmp/spdk2.sock 00:06:41.764 14:47:14 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:41.764 14:47:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.764 14:47:14 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:41.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.764 14:47:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.764 14:47:14 -- common/autotest_common.sh@653 -- # waitforlisten 70021 /var/tmp/spdk2.sock 00:06:41.764 14:47:14 -- common/autotest_common.sh@829 -- # '[' -z 70021 ']' 00:06:41.764 14:47:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.764 14:47:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.764 14:47:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
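The "overlapped" in this test's name is literal bit overlap between the two core masks: 0x7 selects cores 0-2 for the first target, 0x1c selects cores 2-4 for the second, and the single shared bit is core 2, which is exactly where the claim fails in the lines that follow. A small helper (not part of the test scripts) to expand the masks and their intersection:

  # Expand an SPDK -m core mask into the cores it selects.
  cores_in_mask() {
      local mask=$1 i
      for i in {0..7}; do
          (( (mask >> i) & 1 )) && printf '%d ' "$i"
      done
      echo
  }

  cores_in_mask 0x7                  # -> 0 1 2   (first spdk_tgt)
  cores_in_mask 0x1c                 # -> 2 3 4   (second spdk_tgt)
  cores_in_mask $(( 0x7 & 0x1c ))    # -> 2       (the contested core)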
00:06:41.764 14:47:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.764 14:47:14 -- common/autotest_common.sh@10 -- # set +x 00:06:41.764 [2024-12-01 14:47:14.636602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.764 [2024-12-01 14:47:14.636732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70021 ] 00:06:41.764 [2024-12-01 14:47:14.780166] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69992 has claimed it. 00:06:41.764 [2024-12-01 14:47:14.780228] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.365 ERROR: process (pid: 70021) is no longer running 00:06:42.366 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70021) - No such process 00:06:42.366 14:47:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.366 14:47:15 -- common/autotest_common.sh@862 -- # return 1 00:06:42.366 14:47:15 -- common/autotest_common.sh@653 -- # es=1 00:06:42.366 14:47:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.366 14:47:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.366 14:47:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.366 14:47:15 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:42.366 14:47:15 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.366 14:47:15 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.366 14:47:15 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.366 14:47:15 -- event/cpu_locks.sh@141 -- # killprocess 69992 00:06:42.366 14:47:15 -- common/autotest_common.sh@936 -- # '[' -z 69992 ']' 00:06:42.366 14:47:15 -- common/autotest_common.sh@940 -- # kill -0 69992 00:06:42.366 14:47:15 -- common/autotest_common.sh@941 -- # uname 00:06:42.366 14:47:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:42.366 14:47:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69992 00:06:42.366 killing process with pid 69992 00:06:42.366 14:47:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:42.366 14:47:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:42.366 14:47:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69992' 00:06:42.366 14:47:15 -- common/autotest_common.sh@955 -- # kill 69992 00:06:42.366 14:47:15 -- common/autotest_common.sh@960 -- # wait 69992 00:06:42.727 ************************************ 00:06:42.727 END TEST locking_overlapped_coremask 00:06:42.727 ************************************ 00:06:42.727 00:06:42.727 real 0m2.215s 00:06:42.727 user 0m6.360s 00:06:42.727 sys 0m0.431s 00:06:42.727 14:47:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.727 14:47:15 -- common/autotest_common.sh@10 -- # set +x 00:06:42.727 14:47:15 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.727 14:47:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.727 14:47:15 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.727 14:47:15 -- common/autotest_common.sh@10 -- # set +x 00:06:42.727 ************************************ 00:06:42.727 START TEST locking_overlapped_coremask_via_rpc 00:06:42.727 ************************************ 00:06:42.727 14:47:15 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:42.727 14:47:15 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70074 00:06:42.727 14:47:15 -- event/cpu_locks.sh@149 -- # waitforlisten 70074 /var/tmp/spdk.sock 00:06:42.727 14:47:15 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.727 14:47:15 -- common/autotest_common.sh@829 -- # '[' -z 70074 ']' 00:06:42.727 14:47:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.727 14:47:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.727 14:47:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.727 14:47:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.727 14:47:15 -- common/autotest_common.sh@10 -- # set +x 00:06:42.986 [2024-12-01 14:47:15.870608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.986 [2024-12-01 14:47:15.870702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70074 ] 00:06:42.986 [2024-12-01 14:47:16.003959] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:42.986 [2024-12-01 14:47:16.004020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.986 [2024-12-01 14:47:16.055901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.986 [2024-12-01 14:47:16.056169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.986 [2024-12-01 14:47:16.056296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.986 [2024-12-01 14:47:16.056321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.923 14:47:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.923 14:47:16 -- common/autotest_common.sh@862 -- # return 0 00:06:43.923 14:47:16 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70104 00:06:43.923 14:47:16 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.923 14:47:16 -- event/cpu_locks.sh@153 -- # waitforlisten 70104 /var/tmp/spdk2.sock 00:06:43.923 14:47:16 -- common/autotest_common.sh@829 -- # '[' -z 70104 ']' 00:06:43.923 14:47:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.923 14:47:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.923 14:47:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:43.923 14:47:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.923 14:47:16 -- common/autotest_common.sh@10 -- # set +x 00:06:43.923 [2024-12-01 14:47:16.866570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.923 [2024-12-01 14:47:16.866687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70104 ] 00:06:43.923 [2024-12-01 14:47:17.008035] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:43.923 [2024-12-01 14:47:17.008086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.182 [2024-12-01 14:47:17.160904] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.182 [2024-12-01 14:47:17.161928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.182 [2024-12-01 14:47:17.164877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.182 [2024-12-01 14:47:17.164879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.120 14:47:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.120 14:47:17 -- common/autotest_common.sh@862 -- # return 0 00:06:45.120 14:47:17 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.120 14:47:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.120 14:47:17 -- common/autotest_common.sh@10 -- # set +x 00:06:45.120 14:47:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.120 14:47:17 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.120 14:47:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:45.120 14:47:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.120 14:47:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:45.120 14:47:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.120 14:47:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:45.120 14:47:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.120 14:47:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.120 14:47:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.120 14:47:17 -- common/autotest_common.sh@10 -- # set +x 00:06:45.120 [2024-12-01 14:47:17.881913] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70074 has claimed it. 
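Side note: in this via-RPC variant both targets start with --disable-cpumask-locks, so neither claims its cores at startup; the first then takes the locks through the framework_enable_cpumask_locks RPC, and the same RPC against the second target fails on the shared core, as the error above shows. A rough sketch of that sequence (not output from this run; scripts/rpc.py stands in for the harness's rpc_cmd helper, and repo-relative paths are assumed):

```bash
./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                        # starts without claiming cores
./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
# ... wait for both RPC sockets to come up ...
./scripts/rpc.py framework_enable_cpumask_locks                               # first target locks cores 0-2
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks        # fails: core 2 is already locked
ls /var/tmp/spdk_cpu_lock_*                                                   # test expects ..._000 ..._001 ..._002
```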
00:06:45.120 2024/12/01 14:47:17 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:45.120 request: 00:06:45.120 { 00:06:45.120 "method": "framework_enable_cpumask_locks", 00:06:45.120 "params": {} 00:06:45.120 } 00:06:45.120 Got JSON-RPC error response 00:06:45.120 GoRPCClient: error on JSON-RPC call 00:06:45.120 14:47:17 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:45.120 14:47:17 -- common/autotest_common.sh@653 -- # es=1 00:06:45.120 14:47:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.120 14:47:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.120 14:47:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.120 14:47:17 -- event/cpu_locks.sh@158 -- # waitforlisten 70074 /var/tmp/spdk.sock 00:06:45.120 14:47:17 -- common/autotest_common.sh@829 -- # '[' -z 70074 ']' 00:06:45.120 14:47:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.120 14:47:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.120 14:47:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.120 14:47:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.120 14:47:17 -- common/autotest_common.sh@10 -- # set +x 00:06:45.120 14:47:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.120 14:47:18 -- common/autotest_common.sh@862 -- # return 0 00:06:45.120 14:47:18 -- event/cpu_locks.sh@159 -- # waitforlisten 70104 /var/tmp/spdk2.sock 00:06:45.120 14:47:18 -- common/autotest_common.sh@829 -- # '[' -z 70104 ']' 00:06:45.120 14:47:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.120 14:47:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.120 14:47:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:45.120 14:47:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.120 14:47:18 -- common/autotest_common.sh@10 -- # set +x 00:06:45.380 14:47:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.380 14:47:18 -- common/autotest_common.sh@862 -- # return 0 00:06:45.380 14:47:18 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:45.380 14:47:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:45.380 14:47:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:45.380 14:47:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:45.380 00:06:45.380 real 0m2.556s 00:06:45.380 user 0m1.279s 00:06:45.380 sys 0m0.224s 00:06:45.380 14:47:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.380 14:47:18 -- common/autotest_common.sh@10 -- # set +x 00:06:45.380 ************************************ 00:06:45.380 END TEST locking_overlapped_coremask_via_rpc 00:06:45.380 ************************************ 00:06:45.380 14:47:18 -- event/cpu_locks.sh@174 -- # cleanup 00:06:45.380 14:47:18 -- event/cpu_locks.sh@15 -- # [[ -z 70074 ]] 00:06:45.380 14:47:18 -- event/cpu_locks.sh@15 -- # killprocess 70074 00:06:45.380 14:47:18 -- common/autotest_common.sh@936 -- # '[' -z 70074 ']' 00:06:45.380 14:47:18 -- common/autotest_common.sh@940 -- # kill -0 70074 00:06:45.380 14:47:18 -- common/autotest_common.sh@941 -- # uname 00:06:45.380 14:47:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:45.380 14:47:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70074 00:06:45.380 14:47:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:45.380 14:47:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:45.380 killing process with pid 70074 00:06:45.380 14:47:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70074' 00:06:45.380 14:47:18 -- common/autotest_common.sh@955 -- # kill 70074 00:06:45.380 14:47:18 -- common/autotest_common.sh@960 -- # wait 70074 00:06:45.949 14:47:18 -- event/cpu_locks.sh@16 -- # [[ -z 70104 ]] 00:06:45.949 14:47:18 -- event/cpu_locks.sh@16 -- # killprocess 70104 00:06:45.949 14:47:18 -- common/autotest_common.sh@936 -- # '[' -z 70104 ']' 00:06:45.949 14:47:18 -- common/autotest_common.sh@940 -- # kill -0 70104 00:06:45.949 14:47:18 -- common/autotest_common.sh@941 -- # uname 00:06:45.949 14:47:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:45.949 14:47:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70104 00:06:45.949 14:47:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:45.949 14:47:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:45.949 killing process with pid 70104 00:06:45.949 14:47:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70104' 00:06:45.949 14:47:18 -- common/autotest_common.sh@955 -- # kill 70104 00:06:45.949 14:47:18 -- common/autotest_common.sh@960 -- # wait 70104 00:06:46.517 14:47:19 -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.517 14:47:19 -- event/cpu_locks.sh@1 -- # cleanup 00:06:46.517 14:47:19 -- event/cpu_locks.sh@15 -- # [[ -z 70074 ]] 00:06:46.517 14:47:19 -- event/cpu_locks.sh@15 -- # killprocess 70074 00:06:46.517 14:47:19 -- 
common/autotest_common.sh@936 -- # '[' -z 70074 ']' 00:06:46.517 14:47:19 -- common/autotest_common.sh@940 -- # kill -0 70074 00:06:46.517 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70074) - No such process 00:06:46.517 14:47:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70074 is not found' 00:06:46.517 Process with pid 70074 is not found 00:06:46.517 14:47:19 -- event/cpu_locks.sh@16 -- # [[ -z 70104 ]] 00:06:46.517 14:47:19 -- event/cpu_locks.sh@16 -- # killprocess 70104 00:06:46.517 14:47:19 -- common/autotest_common.sh@936 -- # '[' -z 70104 ']' 00:06:46.517 14:47:19 -- common/autotest_common.sh@940 -- # kill -0 70104 00:06:46.517 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70104) - No such process 00:06:46.517 Process with pid 70104 is not found 00:06:46.517 14:47:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70104 is not found' 00:06:46.517 14:47:19 -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.517 00:06:46.517 real 0m20.196s 00:06:46.517 user 0m35.739s 00:06:46.517 sys 0m5.581s 00:06:46.517 14:47:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.517 ************************************ 00:06:46.517 14:47:19 -- common/autotest_common.sh@10 -- # set +x 00:06:46.517 END TEST cpu_locks 00:06:46.517 ************************************ 00:06:46.517 00:06:46.517 real 0m48.660s 00:06:46.517 user 1m35.772s 00:06:46.517 sys 0m9.251s 00:06:46.517 14:47:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.517 14:47:19 -- common/autotest_common.sh@10 -- # set +x 00:06:46.517 ************************************ 00:06:46.517 END TEST event 00:06:46.517 ************************************ 00:06:46.517 14:47:19 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:46.517 14:47:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:46.517 14:47:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.517 14:47:19 -- common/autotest_common.sh@10 -- # set +x 00:06:46.517 ************************************ 00:06:46.517 START TEST thread 00:06:46.517 ************************************ 00:06:46.517 14:47:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:46.517 * Looking for test storage... 
00:06:46.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:46.517 14:47:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:46.517 14:47:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:46.517 14:47:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:46.517 14:47:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:46.517 14:47:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:46.517 14:47:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:46.517 14:47:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:46.517 14:47:19 -- scripts/common.sh@335 -- # IFS=.-: 00:06:46.517 14:47:19 -- scripts/common.sh@335 -- # read -ra ver1 00:06:46.517 14:47:19 -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.517 14:47:19 -- scripts/common.sh@336 -- # read -ra ver2 00:06:46.517 14:47:19 -- scripts/common.sh@337 -- # local 'op=<' 00:06:46.518 14:47:19 -- scripts/common.sh@339 -- # ver1_l=2 00:06:46.518 14:47:19 -- scripts/common.sh@340 -- # ver2_l=1 00:06:46.518 14:47:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:46.518 14:47:19 -- scripts/common.sh@343 -- # case "$op" in 00:06:46.518 14:47:19 -- scripts/common.sh@344 -- # : 1 00:06:46.518 14:47:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:46.518 14:47:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.518 14:47:19 -- scripts/common.sh@364 -- # decimal 1 00:06:46.518 14:47:19 -- scripts/common.sh@352 -- # local d=1 00:06:46.518 14:47:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.518 14:47:19 -- scripts/common.sh@354 -- # echo 1 00:06:46.518 14:47:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:46.518 14:47:19 -- scripts/common.sh@365 -- # decimal 2 00:06:46.518 14:47:19 -- scripts/common.sh@352 -- # local d=2 00:06:46.518 14:47:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.518 14:47:19 -- scripts/common.sh@354 -- # echo 2 00:06:46.518 14:47:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:46.518 14:47:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:46.518 14:47:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:46.518 14:47:19 -- scripts/common.sh@367 -- # return 0 00:06:46.518 14:47:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.518 14:47:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:46.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.518 --rc genhtml_branch_coverage=1 00:06:46.518 --rc genhtml_function_coverage=1 00:06:46.518 --rc genhtml_legend=1 00:06:46.518 --rc geninfo_all_blocks=1 00:06:46.518 --rc geninfo_unexecuted_blocks=1 00:06:46.518 00:06:46.518 ' 00:06:46.518 14:47:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:46.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.518 --rc genhtml_branch_coverage=1 00:06:46.518 --rc genhtml_function_coverage=1 00:06:46.518 --rc genhtml_legend=1 00:06:46.518 --rc geninfo_all_blocks=1 00:06:46.518 --rc geninfo_unexecuted_blocks=1 00:06:46.518 00:06:46.518 ' 00:06:46.518 14:47:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:46.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.518 --rc genhtml_branch_coverage=1 00:06:46.518 --rc genhtml_function_coverage=1 00:06:46.518 --rc genhtml_legend=1 00:06:46.518 --rc geninfo_all_blocks=1 00:06:46.518 --rc geninfo_unexecuted_blocks=1 00:06:46.518 00:06:46.518 ' 00:06:46.518 14:47:19 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:46.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.518 --rc genhtml_branch_coverage=1 00:06:46.518 --rc genhtml_function_coverage=1 00:06:46.518 --rc genhtml_legend=1 00:06:46.518 --rc geninfo_all_blocks=1 00:06:46.518 --rc geninfo_unexecuted_blocks=1 00:06:46.518 00:06:46.518 ' 00:06:46.518 14:47:19 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.518 14:47:19 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:46.518 14:47:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.518 14:47:19 -- common/autotest_common.sh@10 -- # set +x 00:06:46.518 ************************************ 00:06:46.518 START TEST thread_poller_perf 00:06:46.518 ************************************ 00:06:46.518 14:47:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.777 [2024-12-01 14:47:19.642084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.777 [2024-12-01 14:47:19.642188] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70252 ] 00:06:46.777 [2024-12-01 14:47:19.779957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.777 [2024-12-01 14:47:19.849131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.777 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:48.155 [2024-12-01T14:47:21.270Z] ====================================== 00:06:48.155 [2024-12-01T14:47:21.270Z] busy:2213127906 (cyc) 00:06:48.155 [2024-12-01T14:47:21.270Z] total_run_count: 359000 00:06:48.155 [2024-12-01T14:47:21.270Z] tsc_hz: 2200000000 (cyc) 00:06:48.155 [2024-12-01T14:47:21.270Z] ====================================== 00:06:48.155 [2024-12-01T14:47:21.270Z] poller_cost: 6164 (cyc), 2801 (nsec) 00:06:48.155 00:06:48.155 real 0m1.289s 00:06:48.155 user 0m1.119s 00:06:48.155 sys 0m0.062s 00:06:48.155 14:47:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.155 ************************************ 00:06:48.155 END TEST thread_poller_perf 00:06:48.155 ************************************ 00:06:48.155 14:47:20 -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 14:47:20 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.155 14:47:20 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:48.155 14:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.155 14:47:20 -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 ************************************ 00:06:48.155 START TEST thread_poller_perf 00:06:48.155 ************************************ 00:06:48.155 14:47:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.155 [2024-12-01 14:47:20.982984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
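Side note: the poller_cost figures above work out to the busy cycle count divided by total_run_count, with the nanosecond value converted via the reported TSC rate. Re-deriving them for this -l 1 run (integer arithmetic, not output from this run):

```bash
busy_cyc=2213127906    # "busy" cycles reported above
runs=359000            # total_run_count
tsc_hz=2200000000      # reported TSC rate (2.2 GHz)
cyc_per_poll=$(( busy_cyc / runs ))                    # 6164 cyc
ns_per_poll=$(( cyc_per_poll * 1000000000 / tsc_hz ))  # 2801 nsec
echo "poller_cost: ${cyc_per_poll} (cyc), ${ns_per_poll} (nsec)"
```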
00:06:48.155 [2024-12-01 14:47:20.983091] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70293 ] 00:06:48.155 [2024-12-01 14:47:21.118949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.155 [2024-12-01 14:47:21.168964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.155 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:49.531 [2024-12-01T14:47:22.646Z] ====================================== 00:06:49.531 [2024-12-01T14:47:22.646Z] busy:2202461996 (cyc) 00:06:49.531 [2024-12-01T14:47:22.646Z] total_run_count: 5082000 00:06:49.531 [2024-12-01T14:47:22.646Z] tsc_hz: 2200000000 (cyc) 00:06:49.531 [2024-12-01T14:47:22.646Z] ====================================== 00:06:49.531 [2024-12-01T14:47:22.646Z] poller_cost: 433 (cyc), 196 (nsec) 00:06:49.531 00:06:49.531 real 0m1.256s 00:06:49.531 user 0m1.094s 00:06:49.531 sys 0m0.054s 00:06:49.531 14:47:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.531 14:47:22 -- common/autotest_common.sh@10 -- # set +x 00:06:49.531 ************************************ 00:06:49.531 END TEST thread_poller_perf 00:06:49.531 ************************************ 00:06:49.531 14:47:22 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:49.531 00:06:49.531 real 0m2.833s 00:06:49.531 user 0m2.348s 00:06:49.531 sys 0m0.266s 00:06:49.531 14:47:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.531 14:47:22 -- common/autotest_common.sh@10 -- # set +x 00:06:49.531 ************************************ 00:06:49.531 END TEST thread 00:06:49.531 ************************************ 00:06:49.531 14:47:22 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:49.531 14:47:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.531 14:47:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.531 14:47:22 -- common/autotest_common.sh@10 -- # set +x 00:06:49.531 ************************************ 00:06:49.531 START TEST accel 00:06:49.531 ************************************ 00:06:49.531 14:47:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:49.531 * Looking for test storage... 
00:06:49.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:49.531 14:47:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:49.531 14:47:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:49.531 14:47:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:49.531 14:47:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:49.531 14:47:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:49.531 14:47:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:49.531 14:47:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:49.531 14:47:22 -- scripts/common.sh@335 -- # IFS=.-: 00:06:49.531 14:47:22 -- scripts/common.sh@335 -- # read -ra ver1 00:06:49.531 14:47:22 -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.531 14:47:22 -- scripts/common.sh@336 -- # read -ra ver2 00:06:49.531 14:47:22 -- scripts/common.sh@337 -- # local 'op=<' 00:06:49.531 14:47:22 -- scripts/common.sh@339 -- # ver1_l=2 00:06:49.531 14:47:22 -- scripts/common.sh@340 -- # ver2_l=1 00:06:49.531 14:47:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:49.531 14:47:22 -- scripts/common.sh@343 -- # case "$op" in 00:06:49.531 14:47:22 -- scripts/common.sh@344 -- # : 1 00:06:49.531 14:47:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:49.531 14:47:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.531 14:47:22 -- scripts/common.sh@364 -- # decimal 1 00:06:49.531 14:47:22 -- scripts/common.sh@352 -- # local d=1 00:06:49.531 14:47:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.531 14:47:22 -- scripts/common.sh@354 -- # echo 1 00:06:49.531 14:47:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:49.531 14:47:22 -- scripts/common.sh@365 -- # decimal 2 00:06:49.531 14:47:22 -- scripts/common.sh@352 -- # local d=2 00:06:49.531 14:47:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.531 14:47:22 -- scripts/common.sh@354 -- # echo 2 00:06:49.531 14:47:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:49.531 14:47:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:49.531 14:47:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:49.531 14:47:22 -- scripts/common.sh@367 -- # return 0 00:06:49.531 14:47:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.531 14:47:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:49.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.531 --rc genhtml_branch_coverage=1 00:06:49.531 --rc genhtml_function_coverage=1 00:06:49.531 --rc genhtml_legend=1 00:06:49.531 --rc geninfo_all_blocks=1 00:06:49.531 --rc geninfo_unexecuted_blocks=1 00:06:49.531 00:06:49.531 ' 00:06:49.531 14:47:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:49.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.531 --rc genhtml_branch_coverage=1 00:06:49.531 --rc genhtml_function_coverage=1 00:06:49.531 --rc genhtml_legend=1 00:06:49.531 --rc geninfo_all_blocks=1 00:06:49.531 --rc geninfo_unexecuted_blocks=1 00:06:49.531 00:06:49.531 ' 00:06:49.531 14:47:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:49.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.531 --rc genhtml_branch_coverage=1 00:06:49.531 --rc genhtml_function_coverage=1 00:06:49.531 --rc genhtml_legend=1 00:06:49.531 --rc geninfo_all_blocks=1 00:06:49.531 --rc geninfo_unexecuted_blocks=1 00:06:49.531 00:06:49.531 ' 00:06:49.531 14:47:22 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:49.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.531 --rc genhtml_branch_coverage=1 00:06:49.531 --rc genhtml_function_coverage=1 00:06:49.531 --rc genhtml_legend=1 00:06:49.531 --rc geninfo_all_blocks=1 00:06:49.531 --rc geninfo_unexecuted_blocks=1 00:06:49.531 00:06:49.531 ' 00:06:49.531 14:47:22 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:49.531 14:47:22 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:49.531 14:47:22 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:49.531 14:47:22 -- accel/accel.sh@59 -- # spdk_tgt_pid=70371 00:06:49.531 14:47:22 -- accel/accel.sh@60 -- # waitforlisten 70371 00:06:49.531 14:47:22 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:49.531 14:47:22 -- common/autotest_common.sh@829 -- # '[' -z 70371 ']' 00:06:49.531 14:47:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.531 14:47:22 -- accel/accel.sh@58 -- # build_accel_config 00:06:49.531 14:47:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.531 14:47:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.531 14:47:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.531 14:47:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.531 14:47:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.531 14:47:22 -- common/autotest_common.sh@10 -- # set +x 00:06:49.531 14:47:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.531 14:47:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.531 14:47:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.531 14:47:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.531 14:47:22 -- accel/accel.sh@42 -- # jq -r . 00:06:49.531 [2024-12-01 14:47:22.570126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.531 [2024-12-01 14:47:22.570238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70371 ] 00:06:49.790 [2024-12-01 14:47:22.704862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.790 [2024-12-01 14:47:22.756026] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:49.790 [2024-12-01 14:47:22.756172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.726 14:47:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.726 14:47:23 -- common/autotest_common.sh@862 -- # return 0 00:06:50.726 14:47:23 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:50.726 14:47:23 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:50.726 14:47:23 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:50.726 14:47:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.726 14:47:23 -- common/autotest_common.sh@10 -- # set +x 00:06:50.726 14:47:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 
14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # IFS== 00:06:50.726 14:47:23 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.726 14:47:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.726 14:47:23 -- accel/accel.sh@67 -- # killprocess 70371 00:06:50.726 14:47:23 -- common/autotest_common.sh@936 -- # '[' -z 70371 ']' 00:06:50.726 14:47:23 -- common/autotest_common.sh@940 -- # kill -0 70371 00:06:50.726 14:47:23 -- common/autotest_common.sh@941 -- # uname 00:06:50.726 14:47:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.726 14:47:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70371 00:06:50.726 14:47:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.726 14:47:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.726 killing process with pid 70371 00:06:50.726 14:47:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70371' 00:06:50.726 14:47:23 -- common/autotest_common.sh@955 -- # kill 70371 00:06:50.726 14:47:23 -- common/autotest_common.sh@960 -- # wait 70371 00:06:50.986 14:47:23 -- accel/accel.sh@68 -- # trap - ERR 00:06:50.986 14:47:23 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:50.986 14:47:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:50.986 14:47:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.986 14:47:23 -- common/autotest_common.sh@10 -- # set +x 00:06:50.986 14:47:23 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:50.986 14:47:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:50.986 14:47:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.986 14:47:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.986 14:47:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.986 14:47:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.986 14:47:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.986 14:47:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.986 14:47:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.986 14:47:24 -- accel/accel.sh@42 -- # jq -r . 
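Side note: the expected_opcs map built above comes from the accel_get_opc_assignments RPC, with the traced jq filter flattening the returned object into one opcode=module pair per line; in this run every opcode resolves to the software module. A stand-alone form of that query (not output from this run; scripts/rpc.py stands in for the harness's rpc_cmd helper):

```bash
./scripts/rpc.py accel_get_opc_assignments \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# prints one "opcode=module" pair per line; here every opcode maps to software
```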
00:06:50.986 14:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.986 14:47:24 -- common/autotest_common.sh@10 -- # set +x 00:06:50.986 14:47:24 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:50.986 14:47:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:50.986 14:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.986 14:47:24 -- common/autotest_common.sh@10 -- # set +x 00:06:50.986 ************************************ 00:06:50.986 START TEST accel_missing_filename 00:06:50.986 ************************************ 00:06:50.986 14:47:24 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:50.986 14:47:24 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.986 14:47:24 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:50.986 14:47:24 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.986 14:47:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.986 14:47:24 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.986 14:47:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.986 14:47:24 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:50.986 14:47:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:50.986 14:47:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.986 14:47:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.986 14:47:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.986 14:47:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.986 14:47:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.986 14:47:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.986 14:47:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.986 14:47:24 -- accel/accel.sh@42 -- # jq -r . 00:06:50.986 [2024-12-01 14:47:24.095579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.986 [2024-12-01 14:47:24.095667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70446 ] 00:06:51.245 [2024-12-01 14:47:24.219012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.245 [2024-12-01 14:47:24.266437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.245 [2024-12-01 14:47:24.317150] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.505 [2024-12-01 14:47:24.388865] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:51.505 A filename is required. 
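Side note: accel_missing_filename is a negative test: the NOT wrapper runs accel_perf with -w compress but no -l input file and passes only if the tool exits non-zero, which is exactly what the "A filename is required." error above produces. A minimal stand-alone version of the same check (not output from this run; repo-relative path assumed):

```bash
if ./build/examples/accel_perf -t 1 -w compress; then
    echo "unexpected success: compress without -l should have been rejected" >&2
    exit 1
fi
echo "compress without an input file was rejected, as the test expects"
```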
00:06:51.505 14:47:24 -- common/autotest_common.sh@653 -- # es=234 00:06:51.505 14:47:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.505 14:47:24 -- common/autotest_common.sh@662 -- # es=106 00:06:51.505 14:47:24 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:51.505 14:47:24 -- common/autotest_common.sh@670 -- # es=1 00:06:51.505 14:47:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.505 00:06:51.505 real 0m0.372s 00:06:51.505 user 0m0.213s 00:06:51.505 sys 0m0.098s 00:06:51.505 14:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.505 ************************************ 00:06:51.505 END TEST accel_missing_filename 00:06:51.505 14:47:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.505 ************************************ 00:06:51.505 14:47:24 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.505 14:47:24 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:51.505 14:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.505 14:47:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.505 ************************************ 00:06:51.505 START TEST accel_compress_verify 00:06:51.505 ************************************ 00:06:51.505 14:47:24 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.505 14:47:24 -- common/autotest_common.sh@650 -- # local es=0 00:06:51.505 14:47:24 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.505 14:47:24 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:51.505 14:47:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.505 14:47:24 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:51.505 14:47:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.505 14:47:24 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.505 14:47:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.505 14:47:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.505 14:47:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.505 14:47:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.505 14:47:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.505 14:47:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.505 14:47:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.505 14:47:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.505 14:47:24 -- accel/accel.sh@42 -- # jq -r . 00:06:51.505 [2024-12-01 14:47:24.525492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:51.505 [2024-12-01 14:47:24.525577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70465 ] 00:06:51.763 [2024-12-01 14:47:24.662730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.763 [2024-12-01 14:47:24.718192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.764 [2024-12-01 14:47:24.773920] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.764 [2024-12-01 14:47:24.845096] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:52.023 00:06:52.023 Compression does not support the verify option, aborting. 00:06:52.023 14:47:24 -- common/autotest_common.sh@653 -- # es=161 00:06:52.023 14:47:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.023 14:47:24 -- common/autotest_common.sh@662 -- # es=33 00:06:52.023 14:47:24 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:52.023 14:47:24 -- common/autotest_common.sh@670 -- # es=1 00:06:52.023 14:47:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.023 00:06:52.023 real 0m0.398s 00:06:52.023 user 0m0.243s 00:06:52.023 sys 0m0.103s 00:06:52.023 14:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.023 ************************************ 00:06:52.023 END TEST accel_compress_verify 00:06:52.023 ************************************ 00:06:52.023 14:47:24 -- common/autotest_common.sh@10 -- # set +x 00:06:52.023 14:47:24 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:52.023 14:47:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:52.023 14:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.023 14:47:24 -- common/autotest_common.sh@10 -- # set +x 00:06:52.023 ************************************ 00:06:52.023 START TEST accel_wrong_workload 00:06:52.023 ************************************ 00:06:52.023 14:47:24 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:52.023 14:47:24 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.023 14:47:24 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:52.023 14:47:24 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:52.023 14:47:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.023 14:47:24 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:52.023 14:47:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.023 14:47:24 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:52.023 14:47:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:52.023 14:47:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.023 14:47:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.023 14:47:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.023 14:47:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.023 14:47:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.023 14:47:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.023 14:47:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.023 14:47:24 -- accel/accel.sh@42 -- # jq -r . 
00:06:52.024 Unsupported workload type: foobar 00:06:52.024 [2024-12-01 14:47:24.974385] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:52.024 accel_perf options: 00:06:52.024 [-h help message] 00:06:52.024 [-q queue depth per core] 00:06:52.024 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.024 [-T number of threads per core 00:06:52.024 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.024 [-t time in seconds] 00:06:52.024 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.024 [ dif_verify, , dif_generate, dif_generate_copy 00:06:52.024 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.024 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.024 [-S for crc32c workload, use this seed value (default 0) 00:06:52.024 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.024 [-f for fill workload, use this BYTE value (default 255) 00:06:52.024 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.024 [-y verify result if this switch is on] 00:06:52.024 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.024 Can be used to spread operations across a wider range of memory. 00:06:52.024 14:47:24 -- common/autotest_common.sh@653 -- # es=1 00:06:52.024 14:47:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.024 14:47:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.024 14:47:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.024 00:06:52.024 real 0m0.030s 00:06:52.024 user 0m0.020s 00:06:52.024 sys 0m0.009s 00:06:52.024 14:47:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.024 14:47:24 -- common/autotest_common.sh@10 -- # set +x 00:06:52.024 ************************************ 00:06:52.024 END TEST accel_wrong_workload 00:06:52.024 ************************************ 00:06:52.024 14:47:25 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.024 14:47:25 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:52.024 14:47:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.024 14:47:25 -- common/autotest_common.sh@10 -- # set +x 00:06:52.024 ************************************ 00:06:52.024 START TEST accel_negative_buffers 00:06:52.024 ************************************ 00:06:52.024 14:47:25 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.024 14:47:25 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.024 14:47:25 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:52.024 14:47:25 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:52.024 14:47:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.024 14:47:25 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:52.024 14:47:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.024 14:47:25 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:52.024 14:47:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:52.024 14:47:25 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:52.024 14:47:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.024 14:47:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.024 14:47:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.024 14:47:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.024 14:47:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.024 14:47:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.024 14:47:25 -- accel/accel.sh@42 -- # jq -r . 00:06:52.024 -x option must be non-negative. 00:06:52.024 [2024-12-01 14:47:25.053068] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:52.024 accel_perf options: 00:06:52.024 [-h help message] 00:06:52.024 [-q queue depth per core] 00:06:52.024 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.024 [-T number of threads per core 00:06:52.024 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.024 [-t time in seconds] 00:06:52.024 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.024 [ dif_verify, , dif_generate, dif_generate_copy 00:06:52.024 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.024 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.024 [-S for crc32c workload, use this seed value (default 0) 00:06:52.024 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.024 [-f for fill workload, use this BYTE value (default 255) 00:06:52.024 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.024 [-y verify result if this switch is on] 00:06:52.024 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.024 Can be used to spread operations across a wider range of memory. 
00:06:52.024 14:47:25 -- common/autotest_common.sh@653 -- # es=1 00:06:52.024 14:47:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.024 14:47:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.024 14:47:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.024 00:06:52.024 real 0m0.027s 00:06:52.024 user 0m0.016s 00:06:52.024 sys 0m0.011s 00:06:52.024 14:47:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.024 14:47:25 -- common/autotest_common.sh@10 -- # set +x 00:06:52.024 ************************************ 00:06:52.024 END TEST accel_negative_buffers 00:06:52.024 ************************************ 00:06:52.024 14:47:25 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:52.024 14:47:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:52.024 14:47:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.024 14:47:25 -- common/autotest_common.sh@10 -- # set +x 00:06:52.024 ************************************ 00:06:52.024 START TEST accel_crc32c 00:06:52.024 ************************************ 00:06:52.024 14:47:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:52.024 14:47:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.024 14:47:25 -- accel/accel.sh@17 -- # local accel_module 00:06:52.024 14:47:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:52.024 14:47:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:52.024 14:47:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.024 14:47:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.024 14:47:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.024 14:47:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.024 14:47:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.024 14:47:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.024 14:47:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.024 14:47:25 -- accel/accel.sh@42 -- # jq -r . 00:06:52.024 [2024-12-01 14:47:25.135774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.024 [2024-12-01 14:47:25.135876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70529 ] 00:06:52.293 [2024-12-01 14:47:25.271158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.293 [2024-12-01 14:47:25.326797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.674 14:47:26 -- accel/accel.sh@18 -- # out=' 00:06:53.675 SPDK Configuration: 00:06:53.675 Core mask: 0x1 00:06:53.675 00:06:53.675 Accel Perf Configuration: 00:06:53.675 Workload Type: crc32c 00:06:53.675 CRC-32C seed: 32 00:06:53.675 Transfer size: 4096 bytes 00:06:53.675 Vector count 1 00:06:53.675 Module: software 00:06:53.675 Queue depth: 32 00:06:53.675 Allocate depth: 32 00:06:53.675 # threads/core: 1 00:06:53.675 Run time: 1 seconds 00:06:53.675 Verify: Yes 00:06:53.675 00:06:53.675 Running for 1 seconds... 
00:06:53.675 00:06:53.675 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.675 ------------------------------------------------------------------------------------ 00:06:53.675 0,0 575584/s 2248 MiB/s 0 0 00:06:53.675 ==================================================================================== 00:06:53.675 Total 575584/s 2248 MiB/s 0 0' 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:53.675 14:47:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.675 14:47:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.675 14:47:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.675 14:47:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.675 14:47:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.675 14:47:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.675 14:47:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.675 14:47:26 -- accel/accel.sh@42 -- # jq -r . 00:06:53.675 [2024-12-01 14:47:26.530550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.675 [2024-12-01 14:47:26.530631] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70543 ] 00:06:53.675 [2024-12-01 14:47:26.665256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.675 [2024-12-01 14:47:26.712552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val= 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val= 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val=0x1 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val= 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val= 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val=crc32c 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val=32 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val= 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val=software 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val=32 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val=32 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val=1 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val=Yes 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val= 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:53.675 14:47:26 -- accel/accel.sh@21 -- # val= 00:06:53.675 14:47:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # IFS=: 00:06:53.675 14:47:26 -- accel/accel.sh@20 -- # read -r var val 00:06:55.053 14:47:27 -- accel/accel.sh@21 -- # val= 00:06:55.053 14:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # IFS=: 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # read -r var val 00:06:55.053 14:47:27 -- accel/accel.sh@21 -- # val= 00:06:55.053 14:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # IFS=: 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # read -r var val 00:06:55.053 14:47:27 -- accel/accel.sh@21 -- # val= 00:06:55.053 14:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # IFS=: 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # read -r var val 00:06:55.053 14:47:27 -- accel/accel.sh@21 -- # val= 00:06:55.053 14:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # IFS=: 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # read -r var val 00:06:55.053 14:47:27 -- accel/accel.sh@21 -- # val= 00:06:55.053 14:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # IFS=: 00:06:55.053 14:47:27 -- 
accel/accel.sh@20 -- # read -r var val 00:06:55.053 14:47:27 -- accel/accel.sh@21 -- # val= 00:06:55.053 14:47:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # IFS=: 00:06:55.053 14:47:27 -- accel/accel.sh@20 -- # read -r var val 00:06:55.053 14:47:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.053 14:47:27 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:55.053 14:47:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.053 00:06:55.053 real 0m2.781s 00:06:55.053 user 0m2.356s 00:06:55.053 sys 0m0.227s 00:06:55.053 ************************************ 00:06:55.053 END TEST accel_crc32c 00:06:55.053 ************************************ 00:06:55.053 14:47:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.053 14:47:27 -- common/autotest_common.sh@10 -- # set +x 00:06:55.053 14:47:27 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:55.053 14:47:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:55.053 14:47:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.053 14:47:27 -- common/autotest_common.sh@10 -- # set +x 00:06:55.053 ************************************ 00:06:55.053 START TEST accel_crc32c_C2 00:06:55.053 ************************************ 00:06:55.053 14:47:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:55.053 14:47:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.053 14:47:27 -- accel/accel.sh@17 -- # local accel_module 00:06:55.053 14:47:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:55.053 14:47:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:55.053 14:47:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.053 14:47:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.053 14:47:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.053 14:47:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.053 14:47:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.053 14:47:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.053 14:47:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.053 14:47:27 -- accel/accel.sh@42 -- # jq -r . 00:06:55.053 [2024-12-01 14:47:27.963834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.053 [2024-12-01 14:47:27.963928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70578 ] 00:06:55.053 [2024-12-01 14:47:28.101577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.053 [2024-12-01 14:47:28.152670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.431 14:47:29 -- accel/accel.sh@18 -- # out=' 00:06:56.431 SPDK Configuration: 00:06:56.431 Core mask: 0x1 00:06:56.431 00:06:56.431 Accel Perf Configuration: 00:06:56.431 Workload Type: crc32c 00:06:56.431 CRC-32C seed: 0 00:06:56.431 Transfer size: 4096 bytes 00:06:56.431 Vector count 2 00:06:56.431 Module: software 00:06:56.431 Queue depth: 32 00:06:56.431 Allocate depth: 32 00:06:56.431 # threads/core: 1 00:06:56.431 Run time: 1 seconds 00:06:56.431 Verify: Yes 00:06:56.431 00:06:56.431 Running for 1 seconds... 
00:06:56.431 00:06:56.431 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.431 ------------------------------------------------------------------------------------ 00:06:56.431 0,0 443808/s 1733 MiB/s 0 0 00:06:56.431 ==================================================================================== 00:06:56.431 Total 443808/s 1733 MiB/s 0 0' 00:06:56.431 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.431 14:47:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:56.431 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.431 14:47:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:56.431 14:47:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.431 14:47:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.431 14:47:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.431 14:47:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.431 14:47:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.431 14:47:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.431 14:47:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.431 14:47:29 -- accel/accel.sh@42 -- # jq -r . 00:06:56.431 [2024-12-01 14:47:29.357500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.431 [2024-12-01 14:47:29.357737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70597 ] 00:06:56.431 [2024-12-01 14:47:29.494008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.431 [2024-12-01 14:47:29.544141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.691 14:47:29 -- accel/accel.sh@21 -- # val= 00:06:56.691 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.691 14:47:29 -- accel/accel.sh@21 -- # val= 00:06:56.691 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.691 14:47:29 -- accel/accel.sh@21 -- # val=0x1 00:06:56.691 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.691 14:47:29 -- accel/accel.sh@21 -- # val= 00:06:56.691 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.691 14:47:29 -- accel/accel.sh@21 -- # val= 00:06:56.691 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.691 14:47:29 -- accel/accel.sh@21 -- # val=crc32c 00:06:56.691 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.691 14:47:29 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.691 14:47:29 -- accel/accel.sh@21 -- # val=0 00:06:56.691 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.691 14:47:29 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.691 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.691 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.692 14:47:29 -- accel/accel.sh@21 -- # val= 00:06:56.692 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.692 14:47:29 -- accel/accel.sh@21 -- # val=software 00:06:56.692 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.692 14:47:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.692 14:47:29 -- accel/accel.sh@21 -- # val=32 00:06:56.692 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.692 14:47:29 -- accel/accel.sh@21 -- # val=32 00:06:56.692 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.692 14:47:29 -- accel/accel.sh@21 -- # val=1 00:06:56.692 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.692 14:47:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.692 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.692 14:47:29 -- accel/accel.sh@21 -- # val=Yes 00:06:56.692 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.692 14:47:29 -- accel/accel.sh@21 -- # val= 00:06:56.692 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:56.692 14:47:29 -- accel/accel.sh@21 -- # val= 00:06:56.692 14:47:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # IFS=: 00:06:56.692 14:47:29 -- accel/accel.sh@20 -- # read -r var val 00:06:57.629 14:47:30 -- accel/accel.sh@21 -- # val= 00:06:57.629 14:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.629 14:47:30 -- accel/accel.sh@21 -- # val= 00:06:57.629 14:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.629 14:47:30 -- accel/accel.sh@21 -- # val= 00:06:57.629 14:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.629 14:47:30 -- accel/accel.sh@21 -- # val= 00:06:57.629 14:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.629 14:47:30 -- accel/accel.sh@21 -- # val= 00:06:57.629 14:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.629 14:47:30 -- 
accel/accel.sh@20 -- # read -r var val 00:06:57.629 14:47:30 -- accel/accel.sh@21 -- # val= 00:06:57.629 14:47:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # IFS=: 00:06:57.629 14:47:30 -- accel/accel.sh@20 -- # read -r var val 00:06:57.629 14:47:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.629 14:47:30 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:57.629 14:47:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.629 00:06:57.629 real 0m2.786s 00:06:57.629 user 0m2.383s 00:06:57.629 sys 0m0.202s 00:06:57.629 14:47:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.629 ************************************ 00:06:57.629 END TEST accel_crc32c_C2 00:06:57.629 ************************************ 00:06:57.629 14:47:30 -- common/autotest_common.sh@10 -- # set +x 00:06:57.890 14:47:30 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:57.890 14:47:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:57.890 14:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.890 14:47:30 -- common/autotest_common.sh@10 -- # set +x 00:06:57.890 ************************************ 00:06:57.890 START TEST accel_copy 00:06:57.890 ************************************ 00:06:57.890 14:47:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:57.890 14:47:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.890 14:47:30 -- accel/accel.sh@17 -- # local accel_module 00:06:57.890 14:47:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:57.890 14:47:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:57.890 14:47:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.890 14:47:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.890 14:47:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.890 14:47:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.890 14:47:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.890 14:47:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.890 14:47:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.890 14:47:30 -- accel/accel.sh@42 -- # jq -r . 00:06:57.890 [2024-12-01 14:47:30.807527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.890 [2024-12-01 14:47:30.808276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70626 ] 00:06:57.890 [2024-12-01 14:47:30.945469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.890 [2024-12-01 14:47:30.998480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.269 14:47:32 -- accel/accel.sh@18 -- # out=' 00:06:59.269 SPDK Configuration: 00:06:59.269 Core mask: 0x1 00:06:59.269 00:06:59.269 Accel Perf Configuration: 00:06:59.269 Workload Type: copy 00:06:59.269 Transfer size: 4096 bytes 00:06:59.269 Vector count 1 00:06:59.269 Module: software 00:06:59.269 Queue depth: 32 00:06:59.269 Allocate depth: 32 00:06:59.269 # threads/core: 1 00:06:59.269 Run time: 1 seconds 00:06:59.269 Verify: Yes 00:06:59.269 00:06:59.269 Running for 1 seconds... 
00:06:59.269 00:06:59.269 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.269 ------------------------------------------------------------------------------------ 00:06:59.269 0,0 396704/s 1549 MiB/s 0 0 00:06:59.269 ==================================================================================== 00:06:59.269 Total 396704/s 1549 MiB/s 0 0' 00:06:59.269 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.269 14:47:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:59.269 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.269 14:47:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:59.269 14:47:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.269 14:47:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.269 14:47:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.269 14:47:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.270 14:47:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.270 14:47:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.270 14:47:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.270 14:47:32 -- accel/accel.sh@42 -- # jq -r . 00:06:59.270 [2024-12-01 14:47:32.201062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.270 [2024-12-01 14:47:32.201159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70646 ] 00:06:59.270 [2024-12-01 14:47:32.337960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.529 [2024-12-01 14:47:32.386608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val= 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val= 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val=0x1 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val= 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val= 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val=copy 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- 
accel/accel.sh@21 -- # val= 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val=software 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val=32 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val=32 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val=1 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val=Yes 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val= 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:06:59.529 14:47:32 -- accel/accel.sh@21 -- # val= 00:06:59.529 14:47:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # IFS=: 00:06:59.529 14:47:32 -- accel/accel.sh@20 -- # read -r var val 00:07:00.494 14:47:33 -- accel/accel.sh@21 -- # val= 00:07:00.494 14:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.494 14:47:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.494 14:47:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.495 14:47:33 -- accel/accel.sh@21 -- # val= 00:07:00.495 14:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.495 14:47:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.495 14:47:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.495 14:47:33 -- accel/accel.sh@21 -- # val= 00:07:00.495 14:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.495 14:47:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.495 14:47:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.495 14:47:33 -- accel/accel.sh@21 -- # val= 00:07:00.495 14:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.495 14:47:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.495 14:47:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.495 14:47:33 -- accel/accel.sh@21 -- # val= 00:07:00.495 14:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.495 14:47:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.495 14:47:33 -- accel/accel.sh@20 -- # read -r var val 00:07:00.495 14:47:33 -- accel/accel.sh@21 -- # val= 00:07:00.495 14:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.495 14:47:33 -- accel/accel.sh@20 -- # IFS=: 00:07:00.495 14:47:33 -- 
accel/accel.sh@20 -- # read -r var val 00:07:00.495 14:47:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.495 14:47:33 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:00.495 14:47:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.495 00:07:00.495 real 0m2.782s 00:07:00.495 user 0m2.357s 00:07:00.495 sys 0m0.223s 00:07:00.495 14:47:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.495 ************************************ 00:07:00.495 END TEST accel_copy 00:07:00.495 ************************************ 00:07:00.495 14:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:00.753 14:47:33 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.753 14:47:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:00.753 14:47:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.753 14:47:33 -- common/autotest_common.sh@10 -- # set +x 00:07:00.753 ************************************ 00:07:00.753 START TEST accel_fill 00:07:00.753 ************************************ 00:07:00.753 14:47:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.753 14:47:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.753 14:47:33 -- accel/accel.sh@17 -- # local accel_module 00:07:00.753 14:47:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.753 14:47:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.753 14:47:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.753 14:47:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.753 14:47:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.753 14:47:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.753 14:47:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.753 14:47:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.753 14:47:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.753 14:47:33 -- accel/accel.sh@42 -- # jq -r . 00:07:00.753 [2024-12-01 14:47:33.646463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.753 [2024-12-01 14:47:33.646572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70680 ] 00:07:00.753 [2024-12-01 14:47:33.782852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.753 [2024-12-01 14:47:33.837385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.127 14:47:35 -- accel/accel.sh@18 -- # out=' 00:07:02.127 SPDK Configuration: 00:07:02.127 Core mask: 0x1 00:07:02.127 00:07:02.127 Accel Perf Configuration: 00:07:02.127 Workload Type: fill 00:07:02.127 Fill pattern: 0x80 00:07:02.127 Transfer size: 4096 bytes 00:07:02.127 Vector count 1 00:07:02.127 Module: software 00:07:02.127 Queue depth: 64 00:07:02.127 Allocate depth: 64 00:07:02.127 # threads/core: 1 00:07:02.127 Run time: 1 seconds 00:07:02.127 Verify: Yes 00:07:02.127 00:07:02.127 Running for 1 seconds... 
00:07:02.127 00:07:02.127 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.127 ------------------------------------------------------------------------------------ 00:07:02.127 0,0 580032/s 2265 MiB/s 0 0 00:07:02.127 ==================================================================================== 00:07:02.127 Total 580032/s 2265 MiB/s 0 0' 00:07:02.127 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.127 14:47:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.127 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.127 14:47:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.127 14:47:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.127 14:47:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.127 14:47:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.127 14:47:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.127 14:47:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.127 14:47:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.127 14:47:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.127 14:47:35 -- accel/accel.sh@42 -- # jq -r . 00:07:02.127 [2024-12-01 14:47:35.044025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.127 [2024-12-01 14:47:35.044287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70694 ] 00:07:02.127 [2024-12-01 14:47:35.181779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.127 [2024-12-01 14:47:35.232292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val= 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val= 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val=0x1 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val= 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val= 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val=fill 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val=0x80 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 
00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val= 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val=software 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val=64 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val=64 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val=1 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val=Yes 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val= 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:02.385 14:47:35 -- accel/accel.sh@21 -- # val= 00:07:02.385 14:47:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # IFS=: 00:07:02.385 14:47:35 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 14:47:36 -- accel/accel.sh@21 -- # val= 00:07:03.317 14:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 14:47:36 -- accel/accel.sh@21 -- # val= 00:07:03.317 14:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 14:47:36 -- accel/accel.sh@21 -- # val= 00:07:03.317 14:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 14:47:36 -- accel/accel.sh@21 -- # val= 00:07:03.317 14:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 14:47:36 -- accel/accel.sh@21 -- # val= 00:07:03.317 14:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # IFS=: 
00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 14:47:36 -- accel/accel.sh@21 -- # val= 00:07:03.317 14:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # IFS=: 00:07:03.317 ************************************ 00:07:03.317 END TEST accel_fill 00:07:03.317 ************************************ 00:07:03.317 14:47:36 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 14:47:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.317 14:47:36 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:03.317 14:47:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.317 00:07:03.317 real 0m2.794s 00:07:03.317 user 0m2.378s 00:07:03.317 sys 0m0.215s 00:07:03.317 14:47:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.317 14:47:36 -- common/autotest_common.sh@10 -- # set +x 00:07:03.575 14:47:36 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:03.575 14:47:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:03.575 14:47:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.575 14:47:36 -- common/autotest_common.sh@10 -- # set +x 00:07:03.575 ************************************ 00:07:03.575 START TEST accel_copy_crc32c 00:07:03.575 ************************************ 00:07:03.575 14:47:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:03.575 14:47:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.575 14:47:36 -- accel/accel.sh@17 -- # local accel_module 00:07:03.575 14:47:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:03.575 14:47:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:03.575 14:47:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.575 14:47:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.575 14:47:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.575 14:47:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.575 14:47:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.575 14:47:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.575 14:47:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.575 14:47:36 -- accel/accel.sh@42 -- # jq -r . 00:07:03.575 [2024-12-01 14:47:36.495856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.575 [2024-12-01 14:47:36.496465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70733 ] 00:07:03.575 [2024-12-01 14:47:36.630465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.575 [2024-12-01 14:47:36.684552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.953 14:47:37 -- accel/accel.sh@18 -- # out=' 00:07:04.953 SPDK Configuration: 00:07:04.953 Core mask: 0x1 00:07:04.953 00:07:04.953 Accel Perf Configuration: 00:07:04.953 Workload Type: copy_crc32c 00:07:04.953 CRC-32C seed: 0 00:07:04.953 Vector size: 4096 bytes 00:07:04.953 Transfer size: 4096 bytes 00:07:04.953 Vector count 1 00:07:04.953 Module: software 00:07:04.953 Queue depth: 32 00:07:04.953 Allocate depth: 32 00:07:04.953 # threads/core: 1 00:07:04.953 Run time: 1 seconds 00:07:04.953 Verify: Yes 00:07:04.953 00:07:04.953 Running for 1 seconds... 
00:07:04.953 00:07:04.953 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.953 ------------------------------------------------------------------------------------ 00:07:04.953 0,0 316320/s 1235 MiB/s 0 0 00:07:04.953 ==================================================================================== 00:07:04.953 Total 316320/s 1235 MiB/s 0 0' 00:07:04.953 14:47:37 -- accel/accel.sh@20 -- # IFS=: 00:07:04.953 14:47:37 -- accel/accel.sh@20 -- # read -r var val 00:07:04.953 14:47:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:04.953 14:47:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:04.953 14:47:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.953 14:47:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.953 14:47:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.953 14:47:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.953 14:47:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.953 14:47:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.953 14:47:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.953 14:47:37 -- accel/accel.sh@42 -- # jq -r . 00:07:04.953 [2024-12-01 14:47:37.891210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.953 [2024-12-01 14:47:37.891288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70750 ] 00:07:04.953 [2024-12-01 14:47:38.029647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.212 [2024-12-01 14:47:38.082344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.212 14:47:38 -- accel/accel.sh@21 -- # val= 00:07:05.212 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.212 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.212 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.212 14:47:38 -- accel/accel.sh@21 -- # val= 00:07:05.212 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.212 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.212 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.212 14:47:38 -- accel/accel.sh@21 -- # val=0x1 00:07:05.212 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.212 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.212 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.212 14:47:38 -- accel/accel.sh@21 -- # val= 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val= 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val=0 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 
14:47:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val= 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val=software 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val=32 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val=32 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val=1 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val=Yes 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val= 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:05.213 14:47:38 -- accel/accel.sh@21 -- # val= 00:07:05.213 14:47:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # IFS=: 00:07:05.213 14:47:38 -- accel/accel.sh@20 -- # read -r var val 00:07:06.150 14:47:39 -- accel/accel.sh@21 -- # val= 00:07:06.150 14:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.150 14:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.150 14:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.150 14:47:39 -- accel/accel.sh@21 -- # val= 00:07:06.150 14:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.150 14:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.150 14:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.150 14:47:39 -- accel/accel.sh@21 -- # val= 00:07:06.150 14:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.150 14:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.150 14:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.150 14:47:39 -- accel/accel.sh@21 -- # val= 00:07:06.409 14:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.409 14:47:39 -- accel/accel.sh@20 -- # IFS=: 
00:07:06.409 14:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.409 14:47:39 -- accel/accel.sh@21 -- # val= 00:07:06.409 14:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.409 14:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.409 14:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.409 14:47:39 -- accel/accel.sh@21 -- # val= 00:07:06.409 14:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.409 14:47:39 -- accel/accel.sh@20 -- # IFS=: 00:07:06.409 14:47:39 -- accel/accel.sh@20 -- # read -r var val 00:07:06.409 14:47:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.409 14:47:39 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:06.409 14:47:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.409 00:07:06.409 real 0m2.796s 00:07:06.409 user 0m2.369s 00:07:06.409 sys 0m0.222s 00:07:06.409 14:47:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.409 ************************************ 00:07:06.409 END TEST accel_copy_crc32c 00:07:06.409 ************************************ 00:07:06.409 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:07:06.409 14:47:39 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.409 14:47:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:06.409 14:47:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.409 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:07:06.409 ************************************ 00:07:06.409 START TEST accel_copy_crc32c_C2 00:07:06.409 ************************************ 00:07:06.409 14:47:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.409 14:47:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.409 14:47:39 -- accel/accel.sh@17 -- # local accel_module 00:07:06.409 14:47:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:06.409 14:47:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:06.409 14:47:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.409 14:47:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.409 14:47:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.409 14:47:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.409 14:47:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.409 14:47:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.409 14:47:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.409 14:47:39 -- accel/accel.sh@42 -- # jq -r . 00:07:06.409 [2024-12-01 14:47:39.340451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:06.409 [2024-12-01 14:47:39.340541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70785 ] 00:07:06.409 [2024-12-01 14:47:39.477209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.668 [2024-12-01 14:47:39.530611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.604 14:47:40 -- accel/accel.sh@18 -- # out=' 00:07:07.604 SPDK Configuration: 00:07:07.604 Core mask: 0x1 00:07:07.604 00:07:07.604 Accel Perf Configuration: 00:07:07.604 Workload Type: copy_crc32c 00:07:07.604 CRC-32C seed: 0 00:07:07.604 Vector size: 4096 bytes 00:07:07.604 Transfer size: 8192 bytes 00:07:07.604 Vector count 2 00:07:07.604 Module: software 00:07:07.604 Queue depth: 32 00:07:07.604 Allocate depth: 32 00:07:07.604 # threads/core: 1 00:07:07.604 Run time: 1 seconds 00:07:07.604 Verify: Yes 00:07:07.604 00:07:07.604 Running for 1 seconds... 00:07:07.604 00:07:07.604 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.604 ------------------------------------------------------------------------------------ 00:07:07.604 0,0 223712/s 873 MiB/s 0 0 00:07:07.604 ==================================================================================== 00:07:07.604 Total 223712/s 873 MiB/s 0 0' 00:07:07.604 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.604 14:47:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:07.604 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.604 14:47:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:07.604 14:47:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.604 14:47:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.604 14:47:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.604 14:47:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.604 14:47:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.604 14:47:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.604 14:47:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.604 14:47:40 -- accel/accel.sh@42 -- # jq -r . 00:07:07.863 [2024-12-01 14:47:40.726213] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:07.863 [2024-12-01 14:47:40.726519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70804 ] 00:07:07.863 [2024-12-01 14:47:40.855299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.863 [2024-12-01 14:47:40.905925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val= 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val= 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val=0x1 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val= 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val= 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val=0 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val= 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val=software 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val=32 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val=32 
00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val=1 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val=Yes 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val= 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:07.863 14:47:40 -- accel/accel.sh@21 -- # val= 00:07:07.863 14:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # IFS=: 00:07:07.863 14:47:40 -- accel/accel.sh@20 -- # read -r var val 00:07:09.241 14:47:42 -- accel/accel.sh@21 -- # val= 00:07:09.241 14:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.241 14:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.241 14:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.241 14:47:42 -- accel/accel.sh@21 -- # val= 00:07:09.241 14:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.241 14:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.241 14:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.241 14:47:42 -- accel/accel.sh@21 -- # val= 00:07:09.241 14:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.241 14:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.241 14:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.242 14:47:42 -- accel/accel.sh@21 -- # val= 00:07:09.242 14:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.242 14:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.242 14:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.242 14:47:42 -- accel/accel.sh@21 -- # val= 00:07:09.242 14:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.242 14:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.242 14:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.242 14:47:42 -- accel/accel.sh@21 -- # val= 00:07:09.242 14:47:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.242 14:47:42 -- accel/accel.sh@20 -- # IFS=: 00:07:09.242 14:47:42 -- accel/accel.sh@20 -- # read -r var val 00:07:09.242 14:47:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.242 14:47:42 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:09.242 14:47:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.242 00:07:09.242 real 0m2.768s 00:07:09.242 user 0m2.360s 00:07:09.242 sys 0m0.210s 00:07:09.242 ************************************ 00:07:09.242 END TEST accel_copy_crc32c_C2 00:07:09.242 ************************************ 00:07:09.242 14:47:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.242 14:47:42 -- common/autotest_common.sh@10 -- # set +x 00:07:09.242 14:47:42 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:09.242 14:47:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:09.242 14:47:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.242 14:47:42 -- common/autotest_common.sh@10 -- # set +x 00:07:09.242 ************************************ 00:07:09.242 START TEST accel_dualcast 00:07:09.242 ************************************ 00:07:09.242 14:47:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:09.242 14:47:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.242 14:47:42 -- accel/accel.sh@17 -- # local accel_module 00:07:09.242 14:47:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:09.242 14:47:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:09.242 14:47:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.242 14:47:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.242 14:47:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.242 14:47:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.242 14:47:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.242 14:47:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.242 14:47:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.242 14:47:42 -- accel/accel.sh@42 -- # jq -r . 00:07:09.242 [2024-12-01 14:47:42.160844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.242 [2024-12-01 14:47:42.160922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70833 ] 00:07:09.242 [2024-12-01 14:47:42.287725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.242 [2024-12-01 14:47:42.342167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.620 14:47:43 -- accel/accel.sh@18 -- # out=' 00:07:10.620 SPDK Configuration: 00:07:10.620 Core mask: 0x1 00:07:10.620 00:07:10.620 Accel Perf Configuration: 00:07:10.620 Workload Type: dualcast 00:07:10.620 Transfer size: 4096 bytes 00:07:10.620 Vector count 1 00:07:10.620 Module: software 00:07:10.620 Queue depth: 32 00:07:10.620 Allocate depth: 32 00:07:10.620 # threads/core: 1 00:07:10.620 Run time: 1 seconds 00:07:10.620 Verify: Yes 00:07:10.620 00:07:10.620 Running for 1 seconds... 00:07:10.620 00:07:10.620 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.620 ------------------------------------------------------------------------------------ 00:07:10.620 0,0 434080/s 1695 MiB/s 0 0 00:07:10.620 ==================================================================================== 00:07:10.620 Total 434080/s 1695 MiB/s 0 0' 00:07:10.620 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.620 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.620 14:47:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:10.620 14:47:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:10.620 14:47:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.620 14:47:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.620 14:47:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.620 14:47:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.620 14:47:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.620 14:47:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.620 14:47:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.620 14:47:43 -- accel/accel.sh@42 -- # jq -r . 
00:07:10.620 [2024-12-01 14:47:43.549258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.620 [2024-12-01 14:47:43.549502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70853 ] 00:07:10.620 [2024-12-01 14:47:43.686489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.879 [2024-12-01 14:47:43.738960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.879 14:47:43 -- accel/accel.sh@21 -- # val= 00:07:10.879 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.879 14:47:43 -- accel/accel.sh@21 -- # val= 00:07:10.879 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.879 14:47:43 -- accel/accel.sh@21 -- # val=0x1 00:07:10.879 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.879 14:47:43 -- accel/accel.sh@21 -- # val= 00:07:10.879 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.879 14:47:43 -- accel/accel.sh@21 -- # val= 00:07:10.879 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.879 14:47:43 -- accel/accel.sh@21 -- # val=dualcast 00:07:10.879 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.879 14:47:43 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.879 14:47:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.879 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.879 14:47:43 -- accel/accel.sh@21 -- # val= 00:07:10.879 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.879 14:47:43 -- accel/accel.sh@21 -- # val=software 00:07:10.879 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.879 14:47:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.879 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.880 14:47:43 -- accel/accel.sh@21 -- # val=32 00:07:10.880 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.880 14:47:43 -- accel/accel.sh@21 -- # val=32 00:07:10.880 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.880 14:47:43 -- accel/accel.sh@21 -- # val=1 00:07:10.880 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.880 
14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.880 14:47:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.880 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.880 14:47:43 -- accel/accel.sh@21 -- # val=Yes 00:07:10.880 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.880 14:47:43 -- accel/accel.sh@21 -- # val= 00:07:10.880 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:10.880 14:47:43 -- accel/accel.sh@21 -- # val= 00:07:10.880 14:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # IFS=: 00:07:10.880 14:47:43 -- accel/accel.sh@20 -- # read -r var val 00:07:12.258 14:47:44 -- accel/accel.sh@21 -- # val= 00:07:12.258 14:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # IFS=: 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # read -r var val 00:07:12.258 14:47:44 -- accel/accel.sh@21 -- # val= 00:07:12.258 14:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # IFS=: 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # read -r var val 00:07:12.258 14:47:44 -- accel/accel.sh@21 -- # val= 00:07:12.258 14:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # IFS=: 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # read -r var val 00:07:12.258 14:47:44 -- accel/accel.sh@21 -- # val= 00:07:12.258 14:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # IFS=: 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # read -r var val 00:07:12.258 14:47:44 -- accel/accel.sh@21 -- # val= 00:07:12.258 14:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # IFS=: 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # read -r var val 00:07:12.258 14:47:44 -- accel/accel.sh@21 -- # val= 00:07:12.258 14:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # IFS=: 00:07:12.258 14:47:44 -- accel/accel.sh@20 -- # read -r var val 00:07:12.258 14:47:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.258 14:47:44 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:12.258 14:47:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.258 00:07:12.258 real 0m2.797s 00:07:12.258 user 0m2.374s 00:07:12.258 sys 0m0.221s 00:07:12.258 ************************************ 00:07:12.258 END TEST accel_dualcast 00:07:12.258 ************************************ 00:07:12.258 14:47:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.258 14:47:44 -- common/autotest_common.sh@10 -- # set +x 00:07:12.258 14:47:44 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:12.258 14:47:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:12.258 14:47:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.258 14:47:44 -- common/autotest_common.sh@10 -- # set +x 00:07:12.258 ************************************ 00:07:12.258 START TEST accel_compare 00:07:12.258 ************************************ 00:07:12.258 14:47:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:12.258 
14:47:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.258 14:47:44 -- accel/accel.sh@17 -- # local accel_module 00:07:12.258 14:47:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:12.258 14:47:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:12.258 14:47:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.258 14:47:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.258 14:47:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.258 14:47:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.258 14:47:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.258 14:47:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.258 14:47:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.258 14:47:44 -- accel/accel.sh@42 -- # jq -r . 00:07:12.258 [2024-12-01 14:47:45.015526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.258 [2024-12-01 14:47:45.015775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70888 ] 00:07:12.258 [2024-12-01 14:47:45.151094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.258 [2024-12-01 14:47:45.201784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.631 14:47:46 -- accel/accel.sh@18 -- # out=' 00:07:13.631 SPDK Configuration: 00:07:13.631 Core mask: 0x1 00:07:13.631 00:07:13.631 Accel Perf Configuration: 00:07:13.631 Workload Type: compare 00:07:13.631 Transfer size: 4096 bytes 00:07:13.631 Vector count 1 00:07:13.631 Module: software 00:07:13.631 Queue depth: 32 00:07:13.631 Allocate depth: 32 00:07:13.631 # threads/core: 1 00:07:13.631 Run time: 1 seconds 00:07:13.631 Verify: Yes 00:07:13.631 00:07:13.631 Running for 1 seconds... 00:07:13.631 00:07:13.631 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.631 ------------------------------------------------------------------------------------ 00:07:13.631 0,0 568800/s 2221 MiB/s 0 0 00:07:13.631 ==================================================================================== 00:07:13.631 Total 568800/s 2221 MiB/s 0 0' 00:07:13.631 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.631 14:47:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:13.631 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.631 14:47:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:13.631 14:47:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.631 14:47:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.631 14:47:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.631 14:47:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.631 14:47:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.631 14:47:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.631 14:47:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.631 14:47:46 -- accel/accel.sh@42 -- # jq -r . 00:07:13.631 [2024-12-01 14:47:46.405079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:13.631 [2024-12-01 14:47:46.405172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70908 ] 00:07:13.631 [2024-12-01 14:47:46.539229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.631 [2024-12-01 14:47:46.585878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.631 14:47:46 -- accel/accel.sh@21 -- # val= 00:07:13.631 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.631 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.631 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.631 14:47:46 -- accel/accel.sh@21 -- # val= 00:07:13.631 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.631 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.631 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.631 14:47:46 -- accel/accel.sh@21 -- # val=0x1 00:07:13.631 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.631 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.631 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.631 14:47:46 -- accel/accel.sh@21 -- # val= 00:07:13.631 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.631 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.631 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.631 14:47:46 -- accel/accel.sh@21 -- # val= 00:07:13.631 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val=compare 00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val= 00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val=software 00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val=32 00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val=32 00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val=1 00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val=Yes 00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val= 00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:13.632 14:47:46 -- accel/accel.sh@21 -- # val= 00:07:13.632 14:47:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # IFS=: 00:07:13.632 14:47:46 -- accel/accel.sh@20 -- # read -r var val 00:07:15.007 14:47:47 -- accel/accel.sh@21 -- # val= 00:07:15.007 14:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:15.007 14:47:47 -- accel/accel.sh@21 -- # val= 00:07:15.007 14:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:15.007 14:47:47 -- accel/accel.sh@21 -- # val= 00:07:15.007 14:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:15.007 14:47:47 -- accel/accel.sh@21 -- # val= 00:07:15.007 14:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:15.007 14:47:47 -- accel/accel.sh@21 -- # val= 00:07:15.007 14:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:15.007 14:47:47 -- accel/accel.sh@21 -- # val= 00:07:15.007 14:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # IFS=: 00:07:15.007 14:47:47 -- accel/accel.sh@20 -- # read -r var val 00:07:15.007 14:47:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.007 14:47:47 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:15.007 14:47:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.007 00:07:15.007 real 0m2.777s 00:07:15.007 user 0m2.368s 00:07:15.007 sys 0m0.207s 00:07:15.007 14:47:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.007 ************************************ 00:07:15.007 END TEST accel_compare 00:07:15.007 ************************************ 00:07:15.007 14:47:47 -- common/autotest_common.sh@10 -- # set +x 00:07:15.007 14:47:47 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:15.007 14:47:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:15.007 14:47:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.007 14:47:47 -- common/autotest_common.sh@10 -- # set +x 00:07:15.007 ************************************ 00:07:15.007 START TEST accel_xor 00:07:15.007 ************************************ 00:07:15.007 14:47:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:15.007 14:47:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.007 14:47:47 -- accel/accel.sh@17 -- # local accel_module 00:07:15.007 
14:47:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:15.007 14:47:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:15.007 14:47:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.007 14:47:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.007 14:47:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.007 14:47:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.007 14:47:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.007 14:47:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.007 14:47:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.007 14:47:47 -- accel/accel.sh@42 -- # jq -r . 00:07:15.007 [2024-12-01 14:47:47.844632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.007 [2024-12-01 14:47:47.844718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70937 ] 00:07:15.007 [2024-12-01 14:47:47.974682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.007 [2024-12-01 14:47:48.027863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.379 14:47:49 -- accel/accel.sh@18 -- # out=' 00:07:16.379 SPDK Configuration: 00:07:16.379 Core mask: 0x1 00:07:16.379 00:07:16.379 Accel Perf Configuration: 00:07:16.379 Workload Type: xor 00:07:16.379 Source buffers: 2 00:07:16.379 Transfer size: 4096 bytes 00:07:16.379 Vector count 1 00:07:16.379 Module: software 00:07:16.379 Queue depth: 32 00:07:16.379 Allocate depth: 32 00:07:16.379 # threads/core: 1 00:07:16.379 Run time: 1 seconds 00:07:16.379 Verify: Yes 00:07:16.379 00:07:16.379 Running for 1 seconds... 00:07:16.379 00:07:16.379 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.379 ------------------------------------------------------------------------------------ 00:07:16.379 0,0 301408/s 1177 MiB/s 0 0 00:07:16.379 ==================================================================================== 00:07:16.379 Total 301408/s 1177 MiB/s 0 0' 00:07:16.379 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.379 14:47:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:16.379 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.379 14:47:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:16.379 14:47:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.379 14:47:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.379 14:47:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.379 14:47:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.379 14:47:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.379 14:47:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.379 14:47:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.379 14:47:49 -- accel/accel.sh@42 -- # jq -r . 00:07:16.379 [2024-12-01 14:47:49.249412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:16.379 [2024-12-01 14:47:49.249501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70956 ] 00:07:16.379 [2024-12-01 14:47:49.383983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.379 [2024-12-01 14:47:49.437555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val= 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val= 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val=0x1 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val= 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val= 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val=xor 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val=2 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val= 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val=software 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val=32 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.637 14:47:49 -- accel/accel.sh@21 -- # val=32 00:07:16.637 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.637 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.638 14:47:49 -- accel/accel.sh@21 -- # val=1 00:07:16.638 14:47:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:16.638 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.638 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.638 14:47:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.638 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.638 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.638 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.638 14:47:49 -- accel/accel.sh@21 -- # val=Yes 00:07:16.638 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.638 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.638 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.638 14:47:49 -- accel/accel.sh@21 -- # val= 00:07:16.638 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.638 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.638 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:16.638 14:47:49 -- accel/accel.sh@21 -- # val= 00:07:16.638 14:47:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.638 14:47:49 -- accel/accel.sh@20 -- # IFS=: 00:07:16.638 14:47:49 -- accel/accel.sh@20 -- # read -r var val 00:07:17.574 14:47:50 -- accel/accel.sh@21 -- # val= 00:07:17.574 14:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.574 14:47:50 -- accel/accel.sh@21 -- # val= 00:07:17.574 14:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.574 14:47:50 -- accel/accel.sh@21 -- # val= 00:07:17.574 14:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.574 14:47:50 -- accel/accel.sh@21 -- # val= 00:07:17.574 14:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.574 14:47:50 -- accel/accel.sh@21 -- # val= 00:07:17.574 14:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.574 14:47:50 -- accel/accel.sh@21 -- # val= 00:07:17.574 14:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # IFS=: 00:07:17.574 14:47:50 -- accel/accel.sh@20 -- # read -r var val 00:07:17.574 14:47:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.574 14:47:50 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:17.574 14:47:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.574 ************************************ 00:07:17.574 END TEST accel_xor 00:07:17.574 ************************************ 00:07:17.574 00:07:17.574 real 0m2.804s 00:07:17.574 user 0m2.381s 00:07:17.574 sys 0m0.222s 00:07:17.574 14:47:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.574 14:47:50 -- common/autotest_common.sh@10 -- # set +x 00:07:17.574 14:47:50 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:17.574 14:47:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:17.574 14:47:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.574 14:47:50 -- common/autotest_common.sh@10 -- # set +x 00:07:17.574 ************************************ 00:07:17.574 START TEST accel_xor 00:07:17.574 ************************************ 00:07:17.574 
14:47:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:17.574 14:47:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.574 14:47:50 -- accel/accel.sh@17 -- # local accel_module 00:07:17.574 14:47:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:17.574 14:47:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:17.574 14:47:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.574 14:47:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.574 14:47:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.574 14:47:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.574 14:47:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.574 14:47:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.574 14:47:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.574 14:47:50 -- accel/accel.sh@42 -- # jq -r . 00:07:17.833 [2024-12-01 14:47:50.703016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.834 [2024-12-01 14:47:50.703109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70991 ] 00:07:17.834 [2024-12-01 14:47:50.838927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.834 [2024-12-01 14:47:50.891827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.210 14:47:52 -- accel/accel.sh@18 -- # out=' 00:07:19.210 SPDK Configuration: 00:07:19.210 Core mask: 0x1 00:07:19.210 00:07:19.210 Accel Perf Configuration: 00:07:19.210 Workload Type: xor 00:07:19.210 Source buffers: 3 00:07:19.210 Transfer size: 4096 bytes 00:07:19.210 Vector count 1 00:07:19.210 Module: software 00:07:19.210 Queue depth: 32 00:07:19.210 Allocate depth: 32 00:07:19.210 # threads/core: 1 00:07:19.210 Run time: 1 seconds 00:07:19.210 Verify: Yes 00:07:19.210 00:07:19.210 Running for 1 seconds... 00:07:19.210 00:07:19.210 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.210 ------------------------------------------------------------------------------------ 00:07:19.210 0,0 287264/s 1122 MiB/s 0 0 00:07:19.210 ==================================================================================== 00:07:19.210 Total 287264/s 1122 MiB/s 0 0' 00:07:19.210 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.210 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.210 14:47:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:19.210 14:47:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.210 14:47:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:19.210 14:47:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.210 14:47:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.210 14:47:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.210 14:47:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.210 14:47:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.210 14:47:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.210 14:47:52 -- accel/accel.sh@42 -- # jq -r . 00:07:19.210 [2024-12-01 14:47:52.096085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:19.210 [2024-12-01 14:47:52.096940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71005 ] 00:07:19.210 [2024-12-01 14:47:52.234059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.210 [2024-12-01 14:47:52.285597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val= 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val= 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val=0x1 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val= 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val= 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val=xor 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val=3 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val= 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val=software 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val=32 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val=32 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val=1 00:07:19.469 14:47:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val=Yes 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val= 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:19.469 14:47:52 -- accel/accel.sh@21 -- # val= 00:07:19.469 14:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # IFS=: 00:07:19.469 14:47:52 -- accel/accel.sh@20 -- # read -r var val 00:07:20.406 14:47:53 -- accel/accel.sh@21 -- # val= 00:07:20.406 14:47:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # IFS=: 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # read -r var val 00:07:20.406 14:47:53 -- accel/accel.sh@21 -- # val= 00:07:20.406 14:47:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # IFS=: 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # read -r var val 00:07:20.406 14:47:53 -- accel/accel.sh@21 -- # val= 00:07:20.406 14:47:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # IFS=: 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # read -r var val 00:07:20.406 14:47:53 -- accel/accel.sh@21 -- # val= 00:07:20.406 14:47:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # IFS=: 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # read -r var val 00:07:20.406 14:47:53 -- accel/accel.sh@21 -- # val= 00:07:20.406 14:47:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # IFS=: 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # read -r var val 00:07:20.406 14:47:53 -- accel/accel.sh@21 -- # val= 00:07:20.406 14:47:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # IFS=: 00:07:20.406 14:47:53 -- accel/accel.sh@20 -- # read -r var val 00:07:20.406 14:47:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.406 14:47:53 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:20.406 14:47:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.406 00:07:20.406 real 0m2.813s 00:07:20.406 user 0m2.398s 00:07:20.406 sys 0m0.210s 00:07:20.406 14:47:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.406 ************************************ 00:07:20.406 END TEST accel_xor 00:07:20.406 ************************************ 00:07:20.406 14:47:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.666 14:47:53 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:20.666 14:47:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:20.666 14:47:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.666 14:47:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.666 ************************************ 00:07:20.666 START TEST accel_dif_verify 00:07:20.666 ************************************ 
00:07:20.666 14:47:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:20.666 14:47:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.666 14:47:53 -- accel/accel.sh@17 -- # local accel_module 00:07:20.666 14:47:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:20.666 14:47:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:20.666 14:47:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.666 14:47:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.666 14:47:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.666 14:47:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.666 14:47:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.666 14:47:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.666 14:47:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.666 14:47:53 -- accel/accel.sh@42 -- # jq -r . 00:07:20.666 [2024-12-01 14:47:53.564983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.666 [2024-12-01 14:47:53.565071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71048 ] 00:07:20.666 [2024-12-01 14:47:53.701015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.666 [2024-12-01 14:47:53.750519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.044 14:47:54 -- accel/accel.sh@18 -- # out=' 00:07:22.044 SPDK Configuration: 00:07:22.044 Core mask: 0x1 00:07:22.044 00:07:22.044 Accel Perf Configuration: 00:07:22.044 Workload Type: dif_verify 00:07:22.044 Vector size: 4096 bytes 00:07:22.044 Transfer size: 4096 bytes 00:07:22.044 Block size: 512 bytes 00:07:22.044 Metadata size: 8 bytes 00:07:22.044 Vector count 1 00:07:22.044 Module: software 00:07:22.044 Queue depth: 32 00:07:22.044 Allocate depth: 32 00:07:22.044 # threads/core: 1 00:07:22.044 Run time: 1 seconds 00:07:22.044 Verify: No 00:07:22.044 00:07:22.044 Running for 1 seconds... 00:07:22.044 00:07:22.044 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.044 ------------------------------------------------------------------------------------ 00:07:22.044 0,0 126464/s 501 MiB/s 0 0 00:07:22.044 ==================================================================================== 00:07:22.044 Total 126464/s 494 MiB/s 0 0' 00:07:22.044 14:47:54 -- accel/accel.sh@20 -- # IFS=: 00:07:22.044 14:47:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:22.044 14:47:54 -- accel/accel.sh@20 -- # read -r var val 00:07:22.044 14:47:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:22.044 14:47:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.044 14:47:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.044 14:47:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.044 14:47:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.044 14:47:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.044 14:47:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.044 14:47:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.044 14:47:54 -- accel/accel.sh@42 -- # jq -r . 00:07:22.044 [2024-12-01 14:47:54.954042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:22.044 [2024-12-01 14:47:54.954465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71062 ] 00:07:22.044 [2024-12-01 14:47:55.090710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.044 [2024-12-01 14:47:55.139895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.302 14:47:55 -- accel/accel.sh@21 -- # val= 00:07:22.302 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.302 14:47:55 -- accel/accel.sh@21 -- # val= 00:07:22.302 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.302 14:47:55 -- accel/accel.sh@21 -- # val=0x1 00:07:22.302 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.302 14:47:55 -- accel/accel.sh@21 -- # val= 00:07:22.302 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.302 14:47:55 -- accel/accel.sh@21 -- # val= 00:07:22.302 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.302 14:47:55 -- accel/accel.sh@21 -- # val=dif_verify 00:07:22.302 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.302 14:47:55 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.302 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.302 14:47:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val= 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val=software 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 
-- # val=32 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val=32 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val=1 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val=No 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val= 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:22.303 14:47:55 -- accel/accel.sh@21 -- # val= 00:07:22.303 14:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # IFS=: 00:07:22.303 14:47:55 -- accel/accel.sh@20 -- # read -r var val 00:07:23.238 14:47:56 -- accel/accel.sh@21 -- # val= 00:07:23.238 14:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.238 14:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:23.238 14:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:23.238 14:47:56 -- accel/accel.sh@21 -- # val= 00:07:23.238 14:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.238 14:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:23.238 14:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:23.238 14:47:56 -- accel/accel.sh@21 -- # val= 00:07:23.238 14:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.238 14:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:23.238 14:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:23.238 14:47:56 -- accel/accel.sh@21 -- # val= 00:07:23.238 14:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.238 14:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:23.238 14:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:23.238 14:47:56 -- accel/accel.sh@21 -- # val= 00:07:23.238 14:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.238 14:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:23.238 14:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:23.239 14:47:56 -- accel/accel.sh@21 -- # val= 00:07:23.239 14:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.239 14:47:56 -- accel/accel.sh@20 -- # IFS=: 00:07:23.239 14:47:56 -- accel/accel.sh@20 -- # read -r var val 00:07:23.239 14:47:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.239 14:47:56 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:23.239 14:47:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.239 00:07:23.239 real 0m2.780s 00:07:23.239 user 0m2.377s 00:07:23.239 sys 0m0.207s 00:07:23.239 14:47:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.239 14:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.239 ************************************ 00:07:23.239 END TEST 
accel_dif_verify 00:07:23.239 ************************************ 00:07:23.498 14:47:56 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:23.498 14:47:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:23.498 14:47:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.498 14:47:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.498 ************************************ 00:07:23.498 START TEST accel_dif_generate 00:07:23.498 ************************************ 00:07:23.498 14:47:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:23.498 14:47:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.498 14:47:56 -- accel/accel.sh@17 -- # local accel_module 00:07:23.498 14:47:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:23.498 14:47:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:23.498 14:47:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.498 14:47:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.498 14:47:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.498 14:47:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.498 14:47:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.498 14:47:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.498 14:47:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.498 14:47:56 -- accel/accel.sh@42 -- # jq -r . 00:07:23.498 [2024-12-01 14:47:56.399572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.498 [2024-12-01 14:47:56.399664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71095 ] 00:07:23.498 [2024-12-01 14:47:56.534740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.498 [2024-12-01 14:47:56.585246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.875 14:47:57 -- accel/accel.sh@18 -- # out=' 00:07:24.875 SPDK Configuration: 00:07:24.875 Core mask: 0x1 00:07:24.875 00:07:24.875 Accel Perf Configuration: 00:07:24.875 Workload Type: dif_generate 00:07:24.875 Vector size: 4096 bytes 00:07:24.875 Transfer size: 4096 bytes 00:07:24.875 Block size: 512 bytes 00:07:24.875 Metadata size: 8 bytes 00:07:24.875 Vector count 1 00:07:24.875 Module: software 00:07:24.875 Queue depth: 32 00:07:24.875 Allocate depth: 32 00:07:24.875 # threads/core: 1 00:07:24.875 Run time: 1 seconds 00:07:24.875 Verify: No 00:07:24.875 00:07:24.875 Running for 1 seconds... 
00:07:24.875 00:07:24.875 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.875 ------------------------------------------------------------------------------------ 00:07:24.875 0,0 154912/s 614 MiB/s 0 0 00:07:24.875 ==================================================================================== 00:07:24.875 Total 154912/s 605 MiB/s 0 0' 00:07:24.875 14:47:57 -- accel/accel.sh@20 -- # IFS=: 00:07:24.875 14:47:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:24.875 14:47:57 -- accel/accel.sh@20 -- # read -r var val 00:07:24.875 14:47:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:24.875 14:47:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.875 14:47:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.875 14:47:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.875 14:47:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.875 14:47:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.875 14:47:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.875 14:47:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.875 14:47:57 -- accel/accel.sh@42 -- # jq -r . 00:07:24.875 [2024-12-01 14:47:57.790158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.875 [2024-12-01 14:47:57.790586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71116 ] 00:07:24.875 [2024-12-01 14:47:57.926686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.875 [2024-12-01 14:47:57.975222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.135 14:47:58 -- accel/accel.sh@21 -- # val= 00:07:25.135 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.135 14:47:58 -- accel/accel.sh@21 -- # val= 00:07:25.135 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.135 14:47:58 -- accel/accel.sh@21 -- # val=0x1 00:07:25.135 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.135 14:47:58 -- accel/accel.sh@21 -- # val= 00:07:25.135 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.135 14:47:58 -- accel/accel.sh@21 -- # val= 00:07:25.135 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.135 14:47:58 -- accel/accel.sh@21 -- # val=dif_generate 00:07:25.135 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.135 14:47:58 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.135 14:47:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.135 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # read -r var val 
00:07:25.135 14:47:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.135 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.135 14:47:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:25.135 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.135 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.135 14:47:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:25.136 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.136 14:47:58 -- accel/accel.sh@21 -- # val= 00:07:25.136 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.136 14:47:58 -- accel/accel.sh@21 -- # val=software 00:07:25.136 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.136 14:47:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.136 14:47:58 -- accel/accel.sh@21 -- # val=32 00:07:25.136 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.136 14:47:58 -- accel/accel.sh@21 -- # val=32 00:07:25.136 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.136 14:47:58 -- accel/accel.sh@21 -- # val=1 00:07:25.136 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.136 14:47:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.136 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.136 14:47:58 -- accel/accel.sh@21 -- # val=No 00:07:25.136 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.136 14:47:58 -- accel/accel.sh@21 -- # val= 00:07:25.136 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:25.136 14:47:58 -- accel/accel.sh@21 -- # val= 00:07:25.136 14:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # IFS=: 00:07:25.136 14:47:58 -- accel/accel.sh@20 -- # read -r var val 00:07:26.071 14:47:59 -- accel/accel.sh@21 -- # val= 00:07:26.071 14:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:26.071 14:47:59 -- accel/accel.sh@21 -- # val= 00:07:26.071 14:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:26.071 14:47:59 -- accel/accel.sh@21 -- # val= 00:07:26.071 14:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.071 14:47:59 -- 
accel/accel.sh@20 -- # IFS=: 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:26.071 14:47:59 -- accel/accel.sh@21 -- # val= 00:07:26.071 ************************************ 00:07:26.071 END TEST accel_dif_generate 00:07:26.071 ************************************ 00:07:26.071 14:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:26.071 14:47:59 -- accel/accel.sh@21 -- # val= 00:07:26.071 14:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:26.071 14:47:59 -- accel/accel.sh@21 -- # val= 00:07:26.071 14:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # IFS=: 00:07:26.071 14:47:59 -- accel/accel.sh@20 -- # read -r var val 00:07:26.071 14:47:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.071 14:47:59 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:26.071 14:47:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.071 00:07:26.071 real 0m2.786s 00:07:26.071 user 0m2.372s 00:07:26.071 sys 0m0.212s 00:07:26.071 14:47:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.071 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:26.331 14:47:59 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:26.331 14:47:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:26.331 14:47:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.331 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:07:26.331 ************************************ 00:07:26.331 START TEST accel_dif_generate_copy 00:07:26.331 ************************************ 00:07:26.331 14:47:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:26.331 14:47:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.331 14:47:59 -- accel/accel.sh@17 -- # local accel_module 00:07:26.331 14:47:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:26.331 14:47:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:26.331 14:47:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.331 14:47:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.331 14:47:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.331 14:47:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.331 14:47:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.331 14:47:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.331 14:47:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.331 14:47:59 -- accel/accel.sh@42 -- # jq -r . 00:07:26.331 [2024-12-01 14:47:59.239029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:26.331 [2024-12-01 14:47:59.239127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71145 ] 00:07:26.331 [2024-12-01 14:47:59.374973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.331 [2024-12-01 14:47:59.426069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.709 14:48:00 -- accel/accel.sh@18 -- # out=' 00:07:27.709 SPDK Configuration: 00:07:27.709 Core mask: 0x1 00:07:27.709 00:07:27.709 Accel Perf Configuration: 00:07:27.709 Workload Type: dif_generate_copy 00:07:27.709 Vector size: 4096 bytes 00:07:27.709 Transfer size: 4096 bytes 00:07:27.709 Vector count 1 00:07:27.709 Module: software 00:07:27.709 Queue depth: 32 00:07:27.709 Allocate depth: 32 00:07:27.709 # threads/core: 1 00:07:27.709 Run time: 1 seconds 00:07:27.709 Verify: No 00:07:27.709 00:07:27.709 Running for 1 seconds... 00:07:27.709 00:07:27.709 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.709 ------------------------------------------------------------------------------------ 00:07:27.709 0,0 118336/s 469 MiB/s 0 0 00:07:27.709 ==================================================================================== 00:07:27.709 Total 118336/s 462 MiB/s 0 0' 00:07:27.709 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.709 14:48:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:27.709 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.709 14:48:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:27.709 14:48:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.709 14:48:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.709 14:48:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.709 14:48:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.709 14:48:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.709 14:48:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.709 14:48:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.709 14:48:00 -- accel/accel.sh@42 -- # jq -r . 00:07:27.709 [2024-12-01 14:48:00.631463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:27.709 [2024-12-01 14:48:00.631715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71170 ] 00:07:27.709 [2024-12-01 14:48:00.767267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.972 [2024-12-01 14:48:00.826927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val= 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val= 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val=0x1 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val= 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val= 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val= 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val=software 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val=32 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 -- # val=32 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.972 14:48:00 -- accel/accel.sh@21 
-- # val=1 00:07:27.972 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.972 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.973 14:48:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.973 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.973 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.973 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.973 14:48:00 -- accel/accel.sh@21 -- # val=No 00:07:27.973 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.973 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.973 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.973 14:48:00 -- accel/accel.sh@21 -- # val= 00:07:27.973 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.973 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.973 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:27.973 14:48:00 -- accel/accel.sh@21 -- # val= 00:07:27.973 14:48:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.973 14:48:00 -- accel/accel.sh@20 -- # IFS=: 00:07:27.973 14:48:00 -- accel/accel.sh@20 -- # read -r var val 00:07:28.907 14:48:02 -- accel/accel.sh@21 -- # val= 00:07:28.907 14:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:28.907 14:48:02 -- accel/accel.sh@21 -- # val= 00:07:28.907 14:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:28.907 14:48:02 -- accel/accel.sh@21 -- # val= 00:07:28.907 14:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:28.907 14:48:02 -- accel/accel.sh@21 -- # val= 00:07:28.907 14:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:28.907 14:48:02 -- accel/accel.sh@21 -- # val= 00:07:28.907 14:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:28.907 14:48:02 -- accel/accel.sh@21 -- # val= 00:07:28.907 14:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # IFS=: 00:07:28.907 14:48:02 -- accel/accel.sh@20 -- # read -r var val 00:07:28.907 14:48:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.907 14:48:02 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:28.907 14:48:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.907 00:07:28.907 real 0m2.807s 00:07:28.907 user 0m2.373s 00:07:28.907 sys 0m0.225s 00:07:28.907 ************************************ 00:07:28.907 END TEST accel_dif_generate_copy 00:07:28.907 ************************************ 00:07:28.907 14:48:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.907 14:48:02 -- common/autotest_common.sh@10 -- # set +x 00:07:29.165 14:48:02 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:29.165 14:48:02 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.165 14:48:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:29.165 14:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.165 14:48:02 -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.165 ************************************ 00:07:29.166 START TEST accel_comp 00:07:29.166 ************************************ 00:07:29.166 14:48:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.166 14:48:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.166 14:48:02 -- accel/accel.sh@17 -- # local accel_module 00:07:29.166 14:48:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.166 14:48:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.166 14:48:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.166 14:48:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.166 14:48:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.166 14:48:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.166 14:48:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.166 14:48:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.166 14:48:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.166 14:48:02 -- accel/accel.sh@42 -- # jq -r . 00:07:29.166 [2024-12-01 14:48:02.096174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.166 [2024-12-01 14:48:02.096482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71199 ] 00:07:29.166 [2024-12-01 14:48:02.224382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.431 [2024-12-01 14:48:02.285774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.433 14:48:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.433 00:07:30.433 SPDK Configuration: 00:07:30.433 Core mask: 0x1 00:07:30.433 00:07:30.433 Accel Perf Configuration: 00:07:30.433 Workload Type: compress 00:07:30.433 Transfer size: 4096 bytes 00:07:30.433 Vector count 1 00:07:30.433 Module: software 00:07:30.433 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.433 Queue depth: 32 00:07:30.433 Allocate depth: 32 00:07:30.433 # threads/core: 1 00:07:30.433 Run time: 1 seconds 00:07:30.433 Verify: No 00:07:30.433 00:07:30.433 Running for 1 seconds... 
00:07:30.433 00:07:30.433 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.433 ------------------------------------------------------------------------------------ 00:07:30.433 0,0 59200/s 246 MiB/s 0 0 00:07:30.433 ==================================================================================== 00:07:30.433 Total 59200/s 231 MiB/s 0 0' 00:07:30.433 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.433 14:48:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.433 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.433 14:48:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.433 14:48:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.433 14:48:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.433 14:48:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.433 14:48:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.433 14:48:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.433 14:48:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.433 14:48:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.433 14:48:03 -- accel/accel.sh@42 -- # jq -r . 00:07:30.433 [2024-12-01 14:48:03.499260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.433 [2024-12-01 14:48:03.499362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71213 ] 00:07:30.703 [2024-12-01 14:48:03.635393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.703 [2024-12-01 14:48:03.686340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val= 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val= 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val= 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val=0x1 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val= 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val= 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val=compress 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 
00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val= 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val=software 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val=32 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val=32 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val=1 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val=No 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val= 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:30.703 14:48:03 -- accel/accel.sh@21 -- # val= 00:07:30.703 14:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # IFS=: 00:07:30.703 14:48:03 -- accel/accel.sh@20 -- # read -r var val 00:07:32.078 14:48:04 -- accel/accel.sh@21 -- # val= 00:07:32.078 14:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:32.078 14:48:04 -- accel/accel.sh@21 -- # val= 00:07:32.078 14:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:32.078 14:48:04 -- accel/accel.sh@21 -- # val= 00:07:32.078 14:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:32.078 14:48:04 -- accel/accel.sh@21 -- # val= 
00:07:32.078 14:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:32.078 14:48:04 -- accel/accel.sh@21 -- # val= 00:07:32.078 14:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:32.078 14:48:04 -- accel/accel.sh@21 -- # val= 00:07:32.078 ************************************ 00:07:32.078 END TEST accel_comp 00:07:32.078 ************************************ 00:07:32.078 14:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # IFS=: 00:07:32.078 14:48:04 -- accel/accel.sh@20 -- # read -r var val 00:07:32.078 14:48:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.078 14:48:04 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:32.078 14:48:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.078 00:07:32.078 real 0m2.803s 00:07:32.078 user 0m2.363s 00:07:32.078 sys 0m0.231s 00:07:32.078 14:48:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.078 14:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:32.078 14:48:04 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:32.078 14:48:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:32.078 14:48:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.078 14:48:04 -- common/autotest_common.sh@10 -- # set +x 00:07:32.078 ************************************ 00:07:32.078 START TEST accel_decomp 00:07:32.078 ************************************ 00:07:32.078 14:48:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:32.078 14:48:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.079 14:48:04 -- accel/accel.sh@17 -- # local accel_module 00:07:32.079 14:48:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:32.079 14:48:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:32.079 14:48:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.079 14:48:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.079 14:48:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.079 14:48:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.079 14:48:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.079 14:48:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.079 14:48:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.079 14:48:04 -- accel/accel.sh@42 -- # jq -r . 00:07:32.079 [2024-12-01 14:48:04.961302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:32.079 [2024-12-01 14:48:04.961415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71253 ] 00:07:32.079 [2024-12-01 14:48:05.098704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.079 [2024-12-01 14:48:05.148431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.453 14:48:06 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:33.453 00:07:33.453 SPDK Configuration: 00:07:33.453 Core mask: 0x1 00:07:33.453 00:07:33.453 Accel Perf Configuration: 00:07:33.453 Workload Type: decompress 00:07:33.453 Transfer size: 4096 bytes 00:07:33.453 Vector count 1 00:07:33.453 Module: software 00:07:33.453 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.453 Queue depth: 32 00:07:33.453 Allocate depth: 32 00:07:33.453 # threads/core: 1 00:07:33.453 Run time: 1 seconds 00:07:33.453 Verify: Yes 00:07:33.453 00:07:33.453 Running for 1 seconds... 00:07:33.453 00:07:33.453 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.453 ------------------------------------------------------------------------------------ 00:07:33.453 0,0 86240/s 158 MiB/s 0 0 00:07:33.453 ==================================================================================== 00:07:33.453 Total 86240/s 336 MiB/s 0 0' 00:07:33.453 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.453 14:48:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.453 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.453 14:48:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.453 14:48:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.453 14:48:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.453 14:48:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.453 14:48:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.453 14:48:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.453 14:48:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.453 14:48:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.453 14:48:06 -- accel/accel.sh@42 -- # jq -r . 00:07:33.453 [2024-12-01 14:48:06.357326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:33.453 [2024-12-01 14:48:06.357591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71267 ] 00:07:33.453 [2024-12-01 14:48:06.494042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.453 [2024-12-01 14:48:06.544264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val= 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val= 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val= 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val=0x1 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val= 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val= 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val=decompress 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val= 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val=software 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val=32 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- 
accel/accel.sh@21 -- # val=32 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val=1 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val=Yes 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val= 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:33.711 14:48:06 -- accel/accel.sh@21 -- # val= 00:07:33.711 14:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # IFS=: 00:07:33.711 14:48:06 -- accel/accel.sh@20 -- # read -r var val 00:07:34.644 14:48:07 -- accel/accel.sh@21 -- # val= 00:07:34.644 14:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:34.644 14:48:07 -- accel/accel.sh@21 -- # val= 00:07:34.644 14:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:34.644 14:48:07 -- accel/accel.sh@21 -- # val= 00:07:34.644 14:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:34.644 14:48:07 -- accel/accel.sh@21 -- # val= 00:07:34.644 14:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:34.644 14:48:07 -- accel/accel.sh@21 -- # val= 00:07:34.644 14:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:34.644 14:48:07 -- accel/accel.sh@21 -- # val= 00:07:34.644 14:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # IFS=: 00:07:34.644 ************************************ 00:07:34.644 END TEST accel_decomp 00:07:34.644 ************************************ 00:07:34.644 14:48:07 -- accel/accel.sh@20 -- # read -r var val 00:07:34.644 14:48:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.644 14:48:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.644 14:48:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.644 00:07:34.644 real 0m2.801s 00:07:34.644 user 0m2.358s 00:07:34.644 sys 0m0.237s 00:07:34.644 14:48:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.644 14:48:07 -- common/autotest_common.sh@10 -- # set +x 00:07:34.902 14:48:07 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:34.902 14:48:07 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:34.902 14:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.902 14:48:07 -- common/autotest_common.sh@10 -- # set +x 00:07:34.902 ************************************ 00:07:34.902 START TEST accel_decmop_full 00:07:34.902 ************************************ 00:07:34.902 14:48:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.902 14:48:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.902 14:48:07 -- accel/accel.sh@17 -- # local accel_module 00:07:34.902 14:48:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.902 14:48:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.902 14:48:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.902 14:48:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.902 14:48:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.902 14:48:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.902 14:48:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.902 14:48:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.902 14:48:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.902 14:48:07 -- accel/accel.sh@42 -- # jq -r . 00:07:34.902 [2024-12-01 14:48:07.811303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:34.902 [2024-12-01 14:48:07.811581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71302 ] 00:07:34.902 [2024-12-01 14:48:07.944656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.902 [2024-12-01 14:48:07.995626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.305 14:48:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:36.305 00:07:36.305 SPDK Configuration: 00:07:36.305 Core mask: 0x1 00:07:36.305 00:07:36.305 Accel Perf Configuration: 00:07:36.305 Workload Type: decompress 00:07:36.305 Transfer size: 111250 bytes 00:07:36.305 Vector count 1 00:07:36.305 Module: software 00:07:36.305 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.305 Queue depth: 32 00:07:36.305 Allocate depth: 32 00:07:36.305 # threads/core: 1 00:07:36.305 Run time: 1 seconds 00:07:36.305 Verify: Yes 00:07:36.305 00:07:36.305 Running for 1 seconds... 
00:07:36.305 00:07:36.305 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.305 ------------------------------------------------------------------------------------ 00:07:36.305 0,0 5760/s 237 MiB/s 0 0 00:07:36.305 ==================================================================================== 00:07:36.305 Total 5760/s 611 MiB/s 0 0' 00:07:36.305 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.305 14:48:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:36.305 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.305 14:48:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:36.305 14:48:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.305 14:48:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.305 14:48:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.305 14:48:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.305 14:48:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.305 14:48:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.305 14:48:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.305 14:48:09 -- accel/accel.sh@42 -- # jq -r . 00:07:36.305 [2024-12-01 14:48:09.199503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.305 [2024-12-01 14:48:09.199580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71321 ] 00:07:36.305 [2024-12-01 14:48:09.330087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.305 [2024-12-01 14:48:09.378061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val= 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val= 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val= 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val=0x1 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val= 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val= 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val=decompress 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:36.564 14:48:09 -- accel/accel.sh@20 
-- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val= 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val=software 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val=32 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val=32 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val=1 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val=Yes 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val= 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:36.564 14:48:09 -- accel/accel.sh@21 -- # val= 00:07:36.564 14:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # IFS=: 00:07:36.564 14:48:09 -- accel/accel.sh@20 -- # read -r var val 00:07:37.497 14:48:10 -- accel/accel.sh@21 -- # val= 00:07:37.497 14:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # IFS=: 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # read -r var val 00:07:37.497 14:48:10 -- accel/accel.sh@21 -- # val= 00:07:37.497 14:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # IFS=: 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # read -r var val 00:07:37.497 14:48:10 -- accel/accel.sh@21 -- # val= 00:07:37.497 14:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # IFS=: 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # read -r var val 00:07:37.497 14:48:10 -- accel/accel.sh@21 -- # 
val= 00:07:37.497 14:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # IFS=: 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # read -r var val 00:07:37.497 14:48:10 -- accel/accel.sh@21 -- # val= 00:07:37.497 14:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # IFS=: 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # read -r var val 00:07:37.497 14:48:10 -- accel/accel.sh@21 -- # val= 00:07:37.497 ************************************ 00:07:37.497 END TEST accel_decmop_full 00:07:37.497 ************************************ 00:07:37.497 14:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # IFS=: 00:07:37.497 14:48:10 -- accel/accel.sh@20 -- # read -r var val 00:07:37.497 14:48:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.497 14:48:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.497 14:48:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.497 00:07:37.497 real 0m2.782s 00:07:37.497 user 0m2.379s 00:07:37.497 sys 0m0.199s 00:07:37.497 14:48:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.497 14:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.756 14:48:10 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:37.756 14:48:10 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:37.756 14:48:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.756 14:48:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.756 ************************************ 00:07:37.756 START TEST accel_decomp_mcore 00:07:37.756 ************************************ 00:07:37.756 14:48:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:37.756 14:48:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.756 14:48:10 -- accel/accel.sh@17 -- # local accel_module 00:07:37.756 14:48:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:37.756 14:48:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:37.756 14:48:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.756 14:48:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.756 14:48:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.756 14:48:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.756 14:48:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.756 14:48:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.756 14:48:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.756 14:48:10 -- accel/accel.sh@42 -- # jq -r . 00:07:37.756 [2024-12-01 14:48:10.649685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:37.756 [2024-12-01 14:48:10.649816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71350 ] 00:07:37.756 [2024-12-01 14:48:10.780430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.756 [2024-12-01 14:48:10.835278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.756 [2024-12-01 14:48:10.835431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.756 [2024-12-01 14:48:10.835553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.756 [2024-12-01 14:48:10.835703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.133 14:48:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:39.133 00:07:39.133 SPDK Configuration: 00:07:39.133 Core mask: 0xf 00:07:39.133 00:07:39.133 Accel Perf Configuration: 00:07:39.133 Workload Type: decompress 00:07:39.133 Transfer size: 4096 bytes 00:07:39.133 Vector count 1 00:07:39.133 Module: software 00:07:39.133 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.133 Queue depth: 32 00:07:39.133 Allocate depth: 32 00:07:39.133 # threads/core: 1 00:07:39.133 Run time: 1 seconds 00:07:39.133 Verify: Yes 00:07:39.133 00:07:39.133 Running for 1 seconds... 00:07:39.133 00:07:39.133 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.133 ------------------------------------------------------------------------------------ 00:07:39.133 0,0 57728/s 106 MiB/s 0 0 00:07:39.133 3,0 52736/s 97 MiB/s 0 0 00:07:39.133 2,0 51424/s 94 MiB/s 0 0 00:07:39.133 1,0 53440/s 98 MiB/s 0 0 00:07:39.133 ==================================================================================== 00:07:39.133 Total 215328/s 841 MiB/s 0 0' 00:07:39.133 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.133 14:48:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:39.133 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.133 14:48:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:39.133 14:48:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.133 14:48:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.133 14:48:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.133 14:48:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.133 14:48:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.133 14:48:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.133 14:48:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.133 14:48:12 -- accel/accel.sh@42 -- # jq -r . 00:07:39.133 [2024-12-01 14:48:12.056615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:39.133 [2024-12-01 14:48:12.056699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71373 ] 00:07:39.133 [2024-12-01 14:48:12.194318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.133 [2024-12-01 14:48:12.242922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.133 [2024-12-01 14:48:12.243068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.133 [2024-12-01 14:48:12.243447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.133 [2024-12-01 14:48:12.243716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val= 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val= 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val= 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val=0xf 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val= 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val= 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val=decompress 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val= 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val=software 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 
00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val=32 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val=32 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val=1 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val=Yes 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.392 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.392 14:48:12 -- accel/accel.sh@21 -- # val= 00:07:39.392 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.393 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.393 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:39.393 14:48:12 -- accel/accel.sh@21 -- # val= 00:07:39.393 14:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.393 14:48:12 -- accel/accel.sh@20 -- # IFS=: 00:07:39.393 14:48:12 -- accel/accel.sh@20 -- # read -r var val 00:07:40.771 14:48:13 -- accel/accel.sh@21 -- # val= 00:07:40.771 14:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # IFS=: 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # read -r var val 00:07:40.771 14:48:13 -- accel/accel.sh@21 -- # val= 00:07:40.771 14:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # IFS=: 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # read -r var val 00:07:40.771 14:48:13 -- accel/accel.sh@21 -- # val= 00:07:40.771 14:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # IFS=: 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # read -r var val 00:07:40.771 14:48:13 -- accel/accel.sh@21 -- # val= 00:07:40.771 14:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # IFS=: 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # read -r var val 00:07:40.771 14:48:13 -- accel/accel.sh@21 -- # val= 00:07:40.771 14:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # IFS=: 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # read -r var val 00:07:40.771 14:48:13 -- accel/accel.sh@21 -- # val= 00:07:40.771 14:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # IFS=: 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # read -r var val 00:07:40.771 14:48:13 -- accel/accel.sh@21 -- # val= 00:07:40.771 14:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # IFS=: 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # read -r var val 00:07:40.771 14:48:13 -- accel/accel.sh@21 -- # val= 00:07:40.771 14:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # IFS=: 00:07:40.771 14:48:13 -- 
accel/accel.sh@20 -- # read -r var val 00:07:40.771 14:48:13 -- accel/accel.sh@21 -- # val= 00:07:40.771 14:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # IFS=: 00:07:40.771 ************************************ 00:07:40.771 END TEST accel_decomp_mcore 00:07:40.771 ************************************ 00:07:40.771 14:48:13 -- accel/accel.sh@20 -- # read -r var val 00:07:40.771 14:48:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.771 14:48:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.771 14:48:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.771 00:07:40.771 real 0m2.834s 00:07:40.771 user 0m9.183s 00:07:40.771 sys 0m0.232s 00:07:40.771 14:48:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.771 14:48:13 -- common/autotest_common.sh@10 -- # set +x 00:07:40.771 14:48:13 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.771 14:48:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:40.771 14:48:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.771 14:48:13 -- common/autotest_common.sh@10 -- # set +x 00:07:40.771 ************************************ 00:07:40.771 START TEST accel_decomp_full_mcore 00:07:40.771 ************************************ 00:07:40.771 14:48:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.771 14:48:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.771 14:48:13 -- accel/accel.sh@17 -- # local accel_module 00:07:40.771 14:48:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.771 14:48:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.771 14:48:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.771 14:48:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.771 14:48:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.771 14:48:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.771 14:48:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.771 14:48:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.771 14:48:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.771 14:48:13 -- accel/accel.sh@42 -- # jq -r . 00:07:40.771 [2024-12-01 14:48:13.533103] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.771 [2024-12-01 14:48:13.533801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71410 ] 00:07:40.771 [2024-12-01 14:48:13.663455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.771 [2024-12-01 14:48:13.717612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.771 [2024-12-01 14:48:13.717780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.771 [2024-12-01 14:48:13.717894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.771 [2024-12-01 14:48:13.718265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.151 14:48:14 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:42.151 00:07:42.151 SPDK Configuration: 00:07:42.151 Core mask: 0xf 00:07:42.151 00:07:42.151 Accel Perf Configuration: 00:07:42.151 Workload Type: decompress 00:07:42.151 Transfer size: 111250 bytes 00:07:42.151 Vector count 1 00:07:42.151 Module: software 00:07:42.151 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:42.151 Queue depth: 32 00:07:42.151 Allocate depth: 32 00:07:42.151 # threads/core: 1 00:07:42.151 Run time: 1 seconds 00:07:42.151 Verify: Yes 00:07:42.151 00:07:42.151 Running for 1 seconds... 00:07:42.151 00:07:42.151 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.151 ------------------------------------------------------------------------------------ 00:07:42.151 0,0 5536/s 228 MiB/s 0 0 00:07:42.151 3,0 5376/s 222 MiB/s 0 0 00:07:42.151 2,0 5408/s 223 MiB/s 0 0 00:07:42.151 1,0 5536/s 228 MiB/s 0 0 00:07:42.151 ==================================================================================== 00:07:42.151 Total 21856/s 2318 MiB/s 0 0' 00:07:42.151 14:48:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.151 14:48:14 -- accel/accel.sh@20 -- # IFS=: 00:07:42.151 14:48:14 -- accel/accel.sh@20 -- # read -r var val 00:07:42.151 14:48:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.151 14:48:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.151 14:48:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.151 14:48:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.152 14:48:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.152 14:48:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.152 14:48:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.152 14:48:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.152 14:48:14 -- accel/accel.sh@42 -- # jq -r . 00:07:42.152 [2024-12-01 14:48:14.939186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:42.152 [2024-12-01 14:48:14.939254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71433 ] 00:07:42.152 [2024-12-01 14:48:15.070629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.152 [2024-12-01 14:48:15.116916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.152 [2024-12-01 14:48:15.117015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.152 [2024-12-01 14:48:15.117172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.152 [2024-12-01 14:48:15.117186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.152 14:48:15 -- accel/accel.sh@21 -- # val= 00:07:42.152 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.152 14:48:15 -- accel/accel.sh@21 -- # val= 00:07:42.152 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.152 14:48:15 -- accel/accel.sh@21 -- # val= 00:07:42.152 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.152 14:48:15 -- accel/accel.sh@21 -- # val=0xf 00:07:42.152 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.152 14:48:15 -- accel/accel.sh@21 -- # val= 00:07:42.152 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.152 14:48:15 -- accel/accel.sh@21 -- # val= 00:07:42.152 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.152 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.152 14:48:15 -- accel/accel.sh@21 -- # val=decompress 00:07:42.152 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.153 14:48:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:42.153 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.153 14:48:15 -- accel/accel.sh@21 -- # val= 00:07:42.153 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.153 14:48:15 -- accel/accel.sh@21 -- # val=software 00:07:42.153 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.153 14:48:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:42.153 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 
00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.153 14:48:15 -- accel/accel.sh@21 -- # val=32 00:07:42.153 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.153 14:48:15 -- accel/accel.sh@21 -- # val=32 00:07:42.153 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.153 14:48:15 -- accel/accel.sh@21 -- # val=1 00:07:42.153 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.153 14:48:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.153 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.153 14:48:15 -- accel/accel.sh@21 -- # val=Yes 00:07:42.153 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.153 14:48:15 -- accel/accel.sh@21 -- # val= 00:07:42.153 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.153 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:42.154 14:48:15 -- accel/accel.sh@21 -- # val= 00:07:42.154 14:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.154 14:48:15 -- accel/accel.sh@20 -- # IFS=: 00:07:42.154 14:48:15 -- accel/accel.sh@20 -- # read -r var val 00:07:43.535 14:48:16 -- accel/accel.sh@21 -- # val= 00:07:43.535 14:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:43.535 14:48:16 -- accel/accel.sh@21 -- # val= 00:07:43.535 14:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:43.535 14:48:16 -- accel/accel.sh@21 -- # val= 00:07:43.535 14:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:43.535 14:48:16 -- accel/accel.sh@21 -- # val= 00:07:43.535 14:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:43.535 14:48:16 -- accel/accel.sh@21 -- # val= 00:07:43.535 14:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:43.535 14:48:16 -- accel/accel.sh@21 -- # val= 00:07:43.535 14:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:43.535 14:48:16 -- accel/accel.sh@21 -- # val= 00:07:43.535 14:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:43.535 14:48:16 -- accel/accel.sh@21 -- # val= 00:07:43.535 14:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:43.535 14:48:16 -- 
accel/accel.sh@20 -- # read -r var val 00:07:43.535 14:48:16 -- accel/accel.sh@21 -- # val= 00:07:43.535 14:48:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # IFS=: 00:07:43.535 14:48:16 -- accel/accel.sh@20 -- # read -r var val 00:07:43.535 14:48:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.535 14:48:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:43.535 14:48:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.535 00:07:43.535 real 0m2.813s 00:07:43.535 user 0m9.196s 00:07:43.535 sys 0m0.234s 00:07:43.535 14:48:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.535 14:48:16 -- common/autotest_common.sh@10 -- # set +x 00:07:43.535 ************************************ 00:07:43.535 END TEST accel_decomp_full_mcore 00:07:43.535 ************************************ 00:07:43.535 14:48:16 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.535 14:48:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:43.535 14:48:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.535 14:48:16 -- common/autotest_common.sh@10 -- # set +x 00:07:43.535 ************************************ 00:07:43.535 START TEST accel_decomp_mthread 00:07:43.535 ************************************ 00:07:43.535 14:48:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.535 14:48:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.535 14:48:16 -- accel/accel.sh@17 -- # local accel_module 00:07:43.535 14:48:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.535 14:48:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.535 14:48:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.535 14:48:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.535 14:48:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.535 14:48:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.535 14:48:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.535 14:48:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.535 14:48:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.535 14:48:16 -- accel/accel.sh@42 -- # jq -r . 00:07:43.535 [2024-12-01 14:48:16.391228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.535 [2024-12-01 14:48:16.391632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71470 ] 00:07:43.535 [2024-12-01 14:48:16.521974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.535 [2024-12-01 14:48:16.571297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.912 14:48:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:44.912 00:07:44.912 SPDK Configuration: 00:07:44.912 Core mask: 0x1 00:07:44.912 00:07:44.912 Accel Perf Configuration: 00:07:44.912 Workload Type: decompress 00:07:44.912 Transfer size: 4096 bytes 00:07:44.912 Vector count 1 00:07:44.912 Module: software 00:07:44.912 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.912 Queue depth: 32 00:07:44.912 Allocate depth: 32 00:07:44.912 # threads/core: 2 00:07:44.912 Run time: 1 seconds 00:07:44.912 Verify: Yes 00:07:44.912 00:07:44.912 Running for 1 seconds... 00:07:44.912 00:07:44.912 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.912 ------------------------------------------------------------------------------------ 00:07:44.912 0,1 43104/s 79 MiB/s 0 0 00:07:44.912 0,0 42976/s 79 MiB/s 0 0 00:07:44.912 ==================================================================================== 00:07:44.912 Total 86080/s 336 MiB/s 0 0' 00:07:44.912 14:48:17 -- accel/accel.sh@20 -- # IFS=: 00:07:44.912 14:48:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:44.912 14:48:17 -- accel/accel.sh@20 -- # read -r var val 00:07:44.912 14:48:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:44.912 14:48:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.912 14:48:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.912 14:48:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.912 14:48:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.912 14:48:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.912 14:48:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.912 14:48:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.912 14:48:17 -- accel/accel.sh@42 -- # jq -r . 00:07:44.912 [2024-12-01 14:48:17.787459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:44.912 [2024-12-01 14:48:17.787533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71484 ] 00:07:44.912 [2024-12-01 14:48:17.917779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.912 [2024-12-01 14:48:17.967587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val= 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val= 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val= 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val=0x1 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val= 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val= 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val=decompress 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val= 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val=software 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val=32 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- 
accel/accel.sh@21 -- # val=32 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val=2 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val=Yes 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val= 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:45.171 14:48:18 -- accel/accel.sh@21 -- # val= 00:07:45.171 14:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # IFS=: 00:07:45.171 14:48:18 -- accel/accel.sh@20 -- # read -r var val 00:07:46.109 14:48:19 -- accel/accel.sh@21 -- # val= 00:07:46.109 14:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # IFS=: 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # read -r var val 00:07:46.109 14:48:19 -- accel/accel.sh@21 -- # val= 00:07:46.109 14:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # IFS=: 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # read -r var val 00:07:46.109 14:48:19 -- accel/accel.sh@21 -- # val= 00:07:46.109 14:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # IFS=: 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # read -r var val 00:07:46.109 14:48:19 -- accel/accel.sh@21 -- # val= 00:07:46.109 14:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # IFS=: 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # read -r var val 00:07:46.109 14:48:19 -- accel/accel.sh@21 -- # val= 00:07:46.109 14:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # IFS=: 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # read -r var val 00:07:46.109 14:48:19 -- accel/accel.sh@21 -- # val= 00:07:46.109 14:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # IFS=: 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # read -r var val 00:07:46.109 14:48:19 -- accel/accel.sh@21 -- # val= 00:07:46.109 14:48:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # IFS=: 00:07:46.109 14:48:19 -- accel/accel.sh@20 -- # read -r var val 00:07:46.109 14:48:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.109 14:48:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:46.109 14:48:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.109 00:07:46.109 real 0m2.785s 00:07:46.109 user 0m2.374s 00:07:46.109 sys 0m0.210s 00:07:46.109 14:48:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.109 14:48:19 -- common/autotest_common.sh@10 -- # set +x 00:07:46.109 ************************************ 00:07:46.109 END 
TEST accel_decomp_mthread 00:07:46.109 ************************************ 00:07:46.109 14:48:19 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:46.109 14:48:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:46.109 14:48:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.109 14:48:19 -- common/autotest_common.sh@10 -- # set +x 00:07:46.109 ************************************ 00:07:46.109 START TEST accel_deomp_full_mthread 00:07:46.109 ************************************ 00:07:46.109 14:48:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:46.109 14:48:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.109 14:48:19 -- accel/accel.sh@17 -- # local accel_module 00:07:46.109 14:48:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:46.109 14:48:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:46.109 14:48:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.109 14:48:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.109 14:48:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.109 14:48:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.109 14:48:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.109 14:48:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.109 14:48:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.109 14:48:19 -- accel/accel.sh@42 -- # jq -r . 00:07:46.368 [2024-12-01 14:48:19.237883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.368 [2024-12-01 14:48:19.237980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71519 ] 00:07:46.368 [2024-12-01 14:48:19.374466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.368 [2024-12-01 14:48:19.425482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.746 14:48:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:47.746 00:07:47.746 SPDK Configuration: 00:07:47.746 Core mask: 0x1 00:07:47.746 00:07:47.746 Accel Perf Configuration: 00:07:47.746 Workload Type: decompress 00:07:47.746 Transfer size: 111250 bytes 00:07:47.746 Vector count 1 00:07:47.746 Module: software 00:07:47.746 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.746 Queue depth: 32 00:07:47.746 Allocate depth: 32 00:07:47.746 # threads/core: 2 00:07:47.746 Run time: 1 seconds 00:07:47.746 Verify: Yes 00:07:47.746 00:07:47.746 Running for 1 seconds... 
00:07:47.746 00:07:47.746 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.746 ------------------------------------------------------------------------------------ 00:07:47.746 0,1 2912/s 120 MiB/s 0 0 00:07:47.746 0,0 2880/s 118 MiB/s 0 0 00:07:47.746 ==================================================================================== 00:07:47.746 Total 5792/s 614 MiB/s 0 0' 00:07:47.746 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:47.746 14:48:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.746 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:47.746 14:48:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.746 14:48:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.746 14:48:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.746 14:48:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.746 14:48:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.746 14:48:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.746 14:48:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.746 14:48:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.746 14:48:20 -- accel/accel.sh@42 -- # jq -r . 00:07:47.746 [2024-12-01 14:48:20.646691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.746 [2024-12-01 14:48:20.646782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71538 ] 00:07:47.746 [2024-12-01 14:48:20.775310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.746 [2024-12-01 14:48:20.824293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.005 14:48:20 -- accel/accel.sh@21 -- # val= 00:07:48.005 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.005 14:48:20 -- accel/accel.sh@21 -- # val= 00:07:48.005 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.005 14:48:20 -- accel/accel.sh@21 -- # val= 00:07:48.005 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.005 14:48:20 -- accel/accel.sh@21 -- # val=0x1 00:07:48.005 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.005 14:48:20 -- accel/accel.sh@21 -- # val= 00:07:48.005 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.005 14:48:20 -- accel/accel.sh@21 -- # val= 00:07:48.005 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.005 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.005 14:48:20 -- accel/accel.sh@21 -- # val=decompress 00:07:48.005 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val= 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val=software 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val=32 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val=32 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val=2 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val=Yes 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val= 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.006 14:48:20 -- accel/accel.sh@21 -- # val= 00:07:48.006 14:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # IFS=: 00:07:48.006 14:48:20 -- accel/accel.sh@20 -- # read -r var val 00:07:48.941 14:48:22 -- accel/accel.sh@21 -- # val= 00:07:48.941 14:48:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # IFS=: 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # read -r var val 00:07:48.941 14:48:22 -- accel/accel.sh@21 -- # val= 00:07:48.941 14:48:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # IFS=: 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # read -r var val 00:07:48.941 14:48:22 -- accel/accel.sh@21 -- # val= 00:07:48.941 14:48:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # IFS=: 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # 
read -r var val 00:07:48.941 14:48:22 -- accel/accel.sh@21 -- # val= 00:07:48.941 14:48:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # IFS=: 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # read -r var val 00:07:48.941 14:48:22 -- accel/accel.sh@21 -- # val= 00:07:48.941 14:48:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # IFS=: 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # read -r var val 00:07:48.941 14:48:22 -- accel/accel.sh@21 -- # val= 00:07:48.941 14:48:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # IFS=: 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # read -r var val 00:07:48.941 14:48:22 -- accel/accel.sh@21 -- # val= 00:07:48.941 14:48:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # IFS=: 00:07:48.941 14:48:22 -- accel/accel.sh@20 -- # read -r var val 00:07:48.941 14:48:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.941 ************************************ 00:07:48.941 END TEST accel_deomp_full_mthread 00:07:48.941 ************************************ 00:07:48.941 14:48:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:48.941 14:48:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.941 00:07:48.941 real 0m2.820s 00:07:48.941 user 0m2.415s 00:07:48.941 sys 0m0.204s 00:07:48.941 14:48:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.941 14:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:49.200 14:48:22 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:49.200 14:48:22 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:49.200 14:48:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.200 14:48:22 -- accel/accel.sh@129 -- # build_accel_config 00:07:49.200 14:48:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.200 14:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:49.200 14:48:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.200 14:48:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.200 14:48:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.200 14:48:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.200 14:48:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.200 14:48:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.200 14:48:22 -- accel/accel.sh@42 -- # jq -r . 00:07:49.200 ************************************ 00:07:49.200 START TEST accel_dif_functional_tests 00:07:49.200 ************************************ 00:07:49.200 14:48:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:49.200 [2024-12-01 14:48:22.136517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:49.200 [2024-12-01 14:48:22.136617] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71574 ] 00:07:49.200 [2024-12-01 14:48:22.273162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.459 [2024-12-01 14:48:22.330012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.459 [2024-12-01 14:48:22.330104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.459 [2024-12-01 14:48:22.330111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.459 00:07:49.459 00:07:49.459 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.459 http://cunit.sourceforge.net/ 00:07:49.459 00:07:49.459 00:07:49.459 Suite: accel_dif 00:07:49.459 Test: verify: DIF generated, GUARD check ...passed 00:07:49.459 Test: verify: DIF generated, APPTAG check ...passed 00:07:49.459 Test: verify: DIF generated, REFTAG check ...passed 00:07:49.459 Test: verify: DIF not generated, GUARD check ...passed 00:07:49.459 Test: verify: DIF not generated, APPTAG check ...[2024-12-01 14:48:22.416367] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:49.459 [2024-12-01 14:48:22.416421] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:49.459 [2024-12-01 14:48:22.416474] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:49.459 passed 00:07:49.459 Test: verify: DIF not generated, REFTAG check ...[2024-12-01 14:48:22.416504] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:49.459 [2024-12-01 14:48:22.416530] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:49.459 passed 00:07:49.459 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:49.459 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:49.459 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-12-01 14:48:22.416557] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:49.459 [2024-12-01 14:48:22.416710] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:49.459 passed 00:07:49.459 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:49.459 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:49.459 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:49.459 Test: generate copy: DIF generated, GUARD check ...passed 00:07:49.459 Test: generate copy: DIF generated, APTTAG check ...[2024-12-01 14:48:22.417135] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:49.459 passed 00:07:49.459 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:49.459 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:49.459 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:49.459 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:49.459 Test: generate copy: iovecs-len validate ...passed 00:07:49.459 Test: generate copy: buffer alignment validate ...[2024-12-01 14:48:22.417846] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:49.459 passed 00:07:49.459 00:07:49.459 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.459 suites 1 1 n/a 0 0 00:07:49.459 tests 20 20 20 0 0 00:07:49.459 asserts 204 204 204 0 n/a 00:07:49.459 00:07:49.459 Elapsed time = 0.005 seconds 00:07:49.718 00:07:49.718 real 0m0.501s 00:07:49.718 user 0m0.670s 00:07:49.718 sys 0m0.145s 00:07:49.718 14:48:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.718 ************************************ 00:07:49.718 END TEST accel_dif_functional_tests 00:07:49.718 14:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:49.718 ************************************ 00:07:49.718 ************************************ 00:07:49.718 END TEST accel 00:07:49.718 ************************************ 00:07:49.718 00:07:49.718 real 1m0.311s 00:07:49.718 user 1m4.598s 00:07:49.718 sys 0m5.965s 00:07:49.718 14:48:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.718 14:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:49.718 14:48:22 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:49.718 14:48:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.718 14:48:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.718 14:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:49.718 ************************************ 00:07:49.718 START TEST accel_rpc 00:07:49.718 ************************************ 00:07:49.718 14:48:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:49.718 * Looking for test storage... 00:07:49.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:49.718 14:48:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.718 14:48:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.718 14:48:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.976 14:48:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.976 14:48:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.976 14:48:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.976 14:48:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.976 14:48:22 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.976 14:48:22 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.976 14:48:22 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.976 14:48:22 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.976 14:48:22 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.976 14:48:22 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.976 14:48:22 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.976 14:48:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.976 14:48:22 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.976 14:48:22 -- scripts/common.sh@344 -- # : 1 00:07:49.976 14:48:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.976 14:48:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.976 14:48:22 -- scripts/common.sh@364 -- # decimal 1 00:07:49.976 14:48:22 -- scripts/common.sh@352 -- # local d=1 00:07:49.976 14:48:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.976 14:48:22 -- scripts/common.sh@354 -- # echo 1 00:07:49.976 14:48:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.976 14:48:22 -- scripts/common.sh@365 -- # decimal 2 00:07:49.976 14:48:22 -- scripts/common.sh@352 -- # local d=2 00:07:49.976 14:48:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.976 14:48:22 -- scripts/common.sh@354 -- # echo 2 00:07:49.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.976 14:48:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.976 14:48:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.976 14:48:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.976 14:48:22 -- scripts/common.sh@367 -- # return 0 00:07:49.976 14:48:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.976 14:48:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.976 --rc genhtml_branch_coverage=1 00:07:49.976 --rc genhtml_function_coverage=1 00:07:49.976 --rc genhtml_legend=1 00:07:49.976 --rc geninfo_all_blocks=1 00:07:49.976 --rc geninfo_unexecuted_blocks=1 00:07:49.976 00:07:49.976 ' 00:07:49.976 14:48:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.976 --rc genhtml_branch_coverage=1 00:07:49.976 --rc genhtml_function_coverage=1 00:07:49.976 --rc genhtml_legend=1 00:07:49.976 --rc geninfo_all_blocks=1 00:07:49.976 --rc geninfo_unexecuted_blocks=1 00:07:49.976 00:07:49.976 ' 00:07:49.976 14:48:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.976 --rc genhtml_branch_coverage=1 00:07:49.976 --rc genhtml_function_coverage=1 00:07:49.976 --rc genhtml_legend=1 00:07:49.976 --rc geninfo_all_blocks=1 00:07:49.976 --rc geninfo_unexecuted_blocks=1 00:07:49.976 00:07:49.976 ' 00:07:49.976 14:48:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.976 --rc genhtml_branch_coverage=1 00:07:49.976 --rc genhtml_function_coverage=1 00:07:49.976 --rc genhtml_legend=1 00:07:49.976 --rc geninfo_all_blocks=1 00:07:49.976 --rc geninfo_unexecuted_blocks=1 00:07:49.976 00:07:49.976 ' 00:07:49.976 14:48:22 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:49.976 14:48:22 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71645 00:07:49.976 14:48:22 -- accel/accel_rpc.sh@15 -- # waitforlisten 71645 00:07:49.976 14:48:22 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:49.976 14:48:22 -- common/autotest_common.sh@829 -- # '[' -z 71645 ']' 00:07:49.976 14:48:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.976 14:48:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.976 14:48:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:49.976 14:48:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.976 14:48:22 -- common/autotest_common.sh@10 -- # set +x 00:07:49.976 [2024-12-01 14:48:22.930796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.976 [2024-12-01 14:48:22.931079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71645 ] 00:07:49.976 [2024-12-01 14:48:23.070942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.234 [2024-12-01 14:48:23.123263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.234 [2024-12-01 14:48:23.123644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.234 14:48:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.234 14:48:23 -- common/autotest_common.sh@862 -- # return 0 00:07:50.234 14:48:23 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:50.234 14:48:23 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:50.234 14:48:23 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:50.234 14:48:23 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:50.234 14:48:23 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:50.234 14:48:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:50.234 14:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.234 14:48:23 -- common/autotest_common.sh@10 -- # set +x 00:07:50.234 ************************************ 00:07:50.234 START TEST accel_assign_opcode 00:07:50.234 ************************************ 00:07:50.234 14:48:23 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:50.234 14:48:23 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:50.234 14:48:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.234 14:48:23 -- common/autotest_common.sh@10 -- # set +x 00:07:50.234 [2024-12-01 14:48:23.200429] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:50.234 14:48:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.234 14:48:23 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:50.234 14:48:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.234 14:48:23 -- common/autotest_common.sh@10 -- # set +x 00:07:50.234 [2024-12-01 14:48:23.208423] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:50.234 14:48:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.234 14:48:23 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:50.234 14:48:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.234 14:48:23 -- common/autotest_common.sh@10 -- # set +x 00:07:50.492 14:48:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.492 14:48:23 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:50.492 14:48:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.492 14:48:23 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:50.492 14:48:23 -- common/autotest_common.sh@10 -- # set +x 00:07:50.492 14:48:23 -- accel/accel_rpc.sh@42 -- # grep software 00:07:50.492 14:48:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.492 software 00:07:50.493 
************************************ 00:07:50.493 END TEST accel_assign_opcode 00:07:50.493 ************************************ 00:07:50.493 00:07:50.493 real 0m0.273s 00:07:50.493 user 0m0.055s 00:07:50.493 sys 0m0.012s 00:07:50.493 14:48:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.493 14:48:23 -- common/autotest_common.sh@10 -- # set +x 00:07:50.493 14:48:23 -- accel/accel_rpc.sh@55 -- # killprocess 71645 00:07:50.493 14:48:23 -- common/autotest_common.sh@936 -- # '[' -z 71645 ']' 00:07:50.493 14:48:23 -- common/autotest_common.sh@940 -- # kill -0 71645 00:07:50.493 14:48:23 -- common/autotest_common.sh@941 -- # uname 00:07:50.493 14:48:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:50.493 14:48:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71645 00:07:50.493 killing process with pid 71645 00:07:50.493 14:48:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:50.493 14:48:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:50.493 14:48:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71645' 00:07:50.493 14:48:23 -- common/autotest_common.sh@955 -- # kill 71645 00:07:50.493 14:48:23 -- common/autotest_common.sh@960 -- # wait 71645 00:07:51.062 00:07:51.062 real 0m1.224s 00:07:51.062 user 0m1.145s 00:07:51.062 sys 0m0.423s 00:07:51.062 14:48:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.062 ************************************ 00:07:51.062 END TEST accel_rpc 00:07:51.062 ************************************ 00:07:51.062 14:48:23 -- common/autotest_common.sh@10 -- # set +x 00:07:51.062 14:48:23 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:51.062 14:48:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.062 14:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.062 14:48:23 -- common/autotest_common.sh@10 -- # set +x 00:07:51.062 ************************************ 00:07:51.062 START TEST app_cmdline 00:07:51.062 ************************************ 00:07:51.062 14:48:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:51.062 * Looking for test storage... 
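The START TEST / END TEST banners and the real/user/sys timings printed throughout come from the run_test helper that wraps each test script. A minimal sketch of such a wrapper, assuming the simplest possible shape (the real helper in autotest_common.sh also validates its arguments and manages xtrace):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # timing lines (real/user/sys) go to stderr
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # usage, as seen in the log:
    # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh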
00:07:51.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:51.062 14:48:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.062 14:48:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.062 14:48:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.062 14:48:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.062 14:48:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.062 14:48:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.062 14:48:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.062 14:48:24 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.062 14:48:24 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.062 14:48:24 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.062 14:48:24 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.062 14:48:24 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.062 14:48:24 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.062 14:48:24 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.062 14:48:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.062 14:48:24 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.062 14:48:24 -- scripts/common.sh@344 -- # : 1 00:07:51.062 14:48:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.062 14:48:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.062 14:48:24 -- scripts/common.sh@364 -- # decimal 1 00:07:51.062 14:48:24 -- scripts/common.sh@352 -- # local d=1 00:07:51.062 14:48:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.062 14:48:24 -- scripts/common.sh@354 -- # echo 1 00:07:51.062 14:48:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.062 14:48:24 -- scripts/common.sh@365 -- # decimal 2 00:07:51.062 14:48:24 -- scripts/common.sh@352 -- # local d=2 00:07:51.062 14:48:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.062 14:48:24 -- scripts/common.sh@354 -- # echo 2 00:07:51.062 14:48:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.062 14:48:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.062 14:48:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.062 14:48:24 -- scripts/common.sh@367 -- # return 0 00:07:51.062 14:48:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.062 14:48:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.062 --rc genhtml_branch_coverage=1 00:07:51.062 --rc genhtml_function_coverage=1 00:07:51.062 --rc genhtml_legend=1 00:07:51.062 --rc geninfo_all_blocks=1 00:07:51.062 --rc geninfo_unexecuted_blocks=1 00:07:51.062 00:07:51.062 ' 00:07:51.062 14:48:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.062 --rc genhtml_branch_coverage=1 00:07:51.062 --rc genhtml_function_coverage=1 00:07:51.062 --rc genhtml_legend=1 00:07:51.062 --rc geninfo_all_blocks=1 00:07:51.062 --rc geninfo_unexecuted_blocks=1 00:07:51.062 00:07:51.062 ' 00:07:51.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
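The repeated scripts/common.sh trace above (lt 1.15 2 -> cmp_versions) checks whether the installed lcov is at least version 2 before enabling the branch/function coverage options. A rough sketch of the dotted-version comparison it performs; this is a simplified stand-in with a hypothetical name, not the real cmp_versions, which also validates each field through a decimal helper:

    ver_lt() {                       # returns 0 (true) if $1 < $2
        local IFS=.-: i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                     # equal versions are not "less than"
    }

    ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"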
00:07:51.062 14:48:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.062 --rc genhtml_branch_coverage=1 00:07:51.062 --rc genhtml_function_coverage=1 00:07:51.062 --rc genhtml_legend=1 00:07:51.062 --rc geninfo_all_blocks=1 00:07:51.062 --rc geninfo_unexecuted_blocks=1 00:07:51.062 00:07:51.062 ' 00:07:51.062 14:48:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.062 --rc genhtml_branch_coverage=1 00:07:51.062 --rc genhtml_function_coverage=1 00:07:51.062 --rc genhtml_legend=1 00:07:51.062 --rc geninfo_all_blocks=1 00:07:51.062 --rc geninfo_unexecuted_blocks=1 00:07:51.062 00:07:51.062 ' 00:07:51.062 14:48:24 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:51.062 14:48:24 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71750 00:07:51.062 14:48:24 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:51.062 14:48:24 -- app/cmdline.sh@18 -- # waitforlisten 71750 00:07:51.062 14:48:24 -- common/autotest_common.sh@829 -- # '[' -z 71750 ']' 00:07:51.062 14:48:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.062 14:48:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.062 14:48:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.062 14:48:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.062 14:48:24 -- common/autotest_common.sh@10 -- # set +x 00:07:51.321 [2024-12-01 14:48:24.204385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:51.322 [2024-12-01 14:48:24.204728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71750 ] 00:07:51.322 [2024-12-01 14:48:24.342825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.322 [2024-12-01 14:48:24.393175] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:51.322 [2024-12-01 14:48:24.393559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.257 14:48:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.257 14:48:25 -- common/autotest_common.sh@862 -- # return 0 00:07:52.257 14:48:25 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:52.515 { 00:07:52.515 "fields": { 00:07:52.515 "commit": "c13c99a5e", 00:07:52.515 "major": 24, 00:07:52.515 "minor": 1, 00:07:52.515 "patch": 1, 00:07:52.515 "suffix": "-pre" 00:07:52.515 }, 00:07:52.515 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:52.515 } 00:07:52.515 14:48:25 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:52.515 14:48:25 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:52.515 14:48:25 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:52.515 14:48:25 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:52.515 14:48:25 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:52.515 14:48:25 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:52.515 14:48:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.515 14:48:25 -- common/autotest_common.sh@10 -- # set +x 00:07:52.515 14:48:25 -- app/cmdline.sh@26 -- # sort 00:07:52.515 14:48:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.515 14:48:25 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:52.515 14:48:25 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:52.515 14:48:25 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.515 14:48:25 -- common/autotest_common.sh@650 -- # local es=0 00:07:52.515 14:48:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.515 14:48:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.515 14:48:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.515 14:48:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.515 14:48:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.515 14:48:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.515 14:48:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:52.515 14:48:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.515 14:48:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:52.515 14:48:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.774 2024/12/01 14:48:25 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:52.774 request: 00:07:52.774 { 00:07:52.774 "method": "env_dpdk_get_mem_stats", 00:07:52.774 "params": {} 00:07:52.774 } 00:07:52.774 Got JSON-RPC error response 00:07:52.774 GoRPCClient: error on JSON-RPC call 00:07:52.774 14:48:25 -- common/autotest_common.sh@653 -- # es=1 00:07:52.774 14:48:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.774 14:48:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:52.774 14:48:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.774 14:48:25 -- app/cmdline.sh@1 -- # killprocess 71750 00:07:52.774 14:48:25 -- common/autotest_common.sh@936 -- # '[' -z 71750 ']' 00:07:52.774 14:48:25 -- common/autotest_common.sh@940 -- # kill -0 71750 00:07:52.774 14:48:25 -- common/autotest_common.sh@941 -- # uname 00:07:52.774 14:48:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:52.774 14:48:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71750 00:07:52.774 14:48:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:52.774 14:48:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:52.774 killing process with pid 71750 00:07:52.774 14:48:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71750' 00:07:52.774 14:48:25 -- common/autotest_common.sh@955 -- # kill 71750 00:07:52.774 14:48:25 -- common/autotest_common.sh@960 -- # wait 71750 00:07:53.033 00:07:53.033 real 0m2.128s 00:07:53.033 user 0m2.611s 00:07:53.033 sys 0m0.483s 00:07:53.033 14:48:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.033 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.033 ************************************ 00:07:53.033 END TEST app_cmdline 00:07:53.033 ************************************ 00:07:53.033 14:48:26 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:53.033 14:48:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.033 14:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.033 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.033 ************************************ 00:07:53.033 START TEST version 00:07:53.033 ************************************ 00:07:53.033 14:48:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:53.295 * Looking for test storage... 
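The app_cmdline run above exercises RPC whitelisting: spdk_tgt is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer (spdk_get_version returns the commit/major/minor/patch JSON shown) while any other call, here env_dpdk_get_mem_stats, is rejected with JSON-RPC error -32601 "Method not found". A condensed sketch of that check using the commands in the trace (paths shortened to ./):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    ./scripts/rpc.py spdk_get_version                      # ok: prints the version JSON
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # exactly the two allowed methods
    ./scripts/rpc.py env_dpdk_get_mem_stats                # fails: Code=-32601 Msg=Method not found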
00:07:53.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:53.295 14:48:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:53.295 14:48:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:53.295 14:48:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:53.295 14:48:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:53.295 14:48:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:53.295 14:48:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:53.295 14:48:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:53.295 14:48:26 -- scripts/common.sh@335 -- # IFS=.-: 00:07:53.295 14:48:26 -- scripts/common.sh@335 -- # read -ra ver1 00:07:53.295 14:48:26 -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.295 14:48:26 -- scripts/common.sh@336 -- # read -ra ver2 00:07:53.295 14:48:26 -- scripts/common.sh@337 -- # local 'op=<' 00:07:53.295 14:48:26 -- scripts/common.sh@339 -- # ver1_l=2 00:07:53.295 14:48:26 -- scripts/common.sh@340 -- # ver2_l=1 00:07:53.295 14:48:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:53.295 14:48:26 -- scripts/common.sh@343 -- # case "$op" in 00:07:53.295 14:48:26 -- scripts/common.sh@344 -- # : 1 00:07:53.295 14:48:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:53.295 14:48:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.295 14:48:26 -- scripts/common.sh@364 -- # decimal 1 00:07:53.295 14:48:26 -- scripts/common.sh@352 -- # local d=1 00:07:53.295 14:48:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.295 14:48:26 -- scripts/common.sh@354 -- # echo 1 00:07:53.295 14:48:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:53.295 14:48:26 -- scripts/common.sh@365 -- # decimal 2 00:07:53.295 14:48:26 -- scripts/common.sh@352 -- # local d=2 00:07:53.295 14:48:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.295 14:48:26 -- scripts/common.sh@354 -- # echo 2 00:07:53.295 14:48:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:53.295 14:48:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:53.295 14:48:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:53.295 14:48:26 -- scripts/common.sh@367 -- # return 0 00:07:53.295 14:48:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.295 14:48:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.295 --rc genhtml_branch_coverage=1 00:07:53.295 --rc genhtml_function_coverage=1 00:07:53.295 --rc genhtml_legend=1 00:07:53.295 --rc geninfo_all_blocks=1 00:07:53.295 --rc geninfo_unexecuted_blocks=1 00:07:53.295 00:07:53.295 ' 00:07:53.295 14:48:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.295 --rc genhtml_branch_coverage=1 00:07:53.295 --rc genhtml_function_coverage=1 00:07:53.295 --rc genhtml_legend=1 00:07:53.295 --rc geninfo_all_blocks=1 00:07:53.295 --rc geninfo_unexecuted_blocks=1 00:07:53.295 00:07:53.295 ' 00:07:53.295 14:48:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.295 --rc genhtml_branch_coverage=1 00:07:53.295 --rc genhtml_function_coverage=1 00:07:53.295 --rc genhtml_legend=1 00:07:53.295 --rc geninfo_all_blocks=1 00:07:53.295 --rc geninfo_unexecuted_blocks=1 00:07:53.295 00:07:53.295 ' 00:07:53.295 14:48:26 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:53.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.295 --rc genhtml_branch_coverage=1 00:07:53.295 --rc genhtml_function_coverage=1 00:07:53.295 --rc genhtml_legend=1 00:07:53.295 --rc geninfo_all_blocks=1 00:07:53.295 --rc geninfo_unexecuted_blocks=1 00:07:53.295 00:07:53.295 ' 00:07:53.295 14:48:26 -- app/version.sh@17 -- # get_header_version major 00:07:53.295 14:48:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.295 14:48:26 -- app/version.sh@14 -- # cut -f2 00:07:53.295 14:48:26 -- app/version.sh@14 -- # tr -d '"' 00:07:53.295 14:48:26 -- app/version.sh@17 -- # major=24 00:07:53.295 14:48:26 -- app/version.sh@18 -- # get_header_version minor 00:07:53.295 14:48:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.295 14:48:26 -- app/version.sh@14 -- # cut -f2 00:07:53.295 14:48:26 -- app/version.sh@14 -- # tr -d '"' 00:07:53.295 14:48:26 -- app/version.sh@18 -- # minor=1 00:07:53.295 14:48:26 -- app/version.sh@19 -- # get_header_version patch 00:07:53.295 14:48:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.295 14:48:26 -- app/version.sh@14 -- # tr -d '"' 00:07:53.295 14:48:26 -- app/version.sh@14 -- # cut -f2 00:07:53.295 14:48:26 -- app/version.sh@19 -- # patch=1 00:07:53.295 14:48:26 -- app/version.sh@20 -- # get_header_version suffix 00:07:53.295 14:48:26 -- app/version.sh@14 -- # cut -f2 00:07:53.295 14:48:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.295 14:48:26 -- app/version.sh@14 -- # tr -d '"' 00:07:53.295 14:48:26 -- app/version.sh@20 -- # suffix=-pre 00:07:53.295 14:48:26 -- app/version.sh@22 -- # version=24.1 00:07:53.295 14:48:26 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:53.295 14:48:26 -- app/version.sh@25 -- # version=24.1.1 00:07:53.295 14:48:26 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:53.295 14:48:26 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:53.295 14:48:26 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:53.295 14:48:26 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:53.295 14:48:26 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:53.295 00:07:53.295 real 0m0.228s 00:07:53.295 user 0m0.155s 00:07:53.295 sys 0m0.112s 00:07:53.295 14:48:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.295 ************************************ 00:07:53.295 END TEST version 00:07:53.295 ************************************ 00:07:53.295 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.295 14:48:26 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:53.295 14:48:26 -- spdk/autotest.sh@191 -- # uname -s 00:07:53.565 14:48:26 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:53.565 14:48:26 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:53.565 14:48:26 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:53.565 14:48:26 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:53.565 14:48:26 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:53.565 14:48:26 
-- spdk/autotest.sh@255 -- # timing_exit lib 00:07:53.565 14:48:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.565 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.565 14:48:26 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:53.565 14:48:26 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:53.565 14:48:26 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:53.565 14:48:26 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:53.565 14:48:26 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:53.565 14:48:26 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:53.565 14:48:26 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:53.565 14:48:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:53.565 14:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.565 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.565 ************************************ 00:07:53.565 START TEST nvmf_tcp 00:07:53.565 ************************************ 00:07:53.565 14:48:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:53.565 * Looking for test storage... 00:07:53.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:53.565 14:48:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:53.565 14:48:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:53.565 14:48:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:53.565 14:48:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:53.565 14:48:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:53.565 14:48:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:53.565 14:48:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:53.565 14:48:26 -- scripts/common.sh@335 -- # IFS=.-: 00:07:53.565 14:48:26 -- scripts/common.sh@335 -- # read -ra ver1 00:07:53.565 14:48:26 -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.565 14:48:26 -- scripts/common.sh@336 -- # read -ra ver2 00:07:53.565 14:48:26 -- scripts/common.sh@337 -- # local 'op=<' 00:07:53.565 14:48:26 -- scripts/common.sh@339 -- # ver1_l=2 00:07:53.565 14:48:26 -- scripts/common.sh@340 -- # ver2_l=1 00:07:53.565 14:48:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:53.565 14:48:26 -- scripts/common.sh@343 -- # case "$op" in 00:07:53.565 14:48:26 -- scripts/common.sh@344 -- # : 1 00:07:53.565 14:48:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:53.565 14:48:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.565 14:48:26 -- scripts/common.sh@364 -- # decimal 1 00:07:53.565 14:48:26 -- scripts/common.sh@352 -- # local d=1 00:07:53.565 14:48:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.565 14:48:26 -- scripts/common.sh@354 -- # echo 1 00:07:53.565 14:48:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:53.565 14:48:26 -- scripts/common.sh@365 -- # decimal 2 00:07:53.565 14:48:26 -- scripts/common.sh@352 -- # local d=2 00:07:53.565 14:48:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.565 14:48:26 -- scripts/common.sh@354 -- # echo 2 00:07:53.565 14:48:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:53.565 14:48:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:53.565 14:48:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:53.565 14:48:26 -- scripts/common.sh@367 -- # return 0 00:07:53.565 14:48:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.565 14:48:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:53.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.565 --rc genhtml_branch_coverage=1 00:07:53.565 --rc genhtml_function_coverage=1 00:07:53.565 --rc genhtml_legend=1 00:07:53.565 --rc geninfo_all_blocks=1 00:07:53.565 --rc geninfo_unexecuted_blocks=1 00:07:53.565 00:07:53.565 ' 00:07:53.565 14:48:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:53.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.565 --rc genhtml_branch_coverage=1 00:07:53.565 --rc genhtml_function_coverage=1 00:07:53.565 --rc genhtml_legend=1 00:07:53.565 --rc geninfo_all_blocks=1 00:07:53.565 --rc geninfo_unexecuted_blocks=1 00:07:53.565 00:07:53.565 ' 00:07:53.565 14:48:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:53.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.565 --rc genhtml_branch_coverage=1 00:07:53.565 --rc genhtml_function_coverage=1 00:07:53.565 --rc genhtml_legend=1 00:07:53.565 --rc geninfo_all_blocks=1 00:07:53.565 --rc geninfo_unexecuted_blocks=1 00:07:53.565 00:07:53.565 ' 00:07:53.565 14:48:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:53.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.565 --rc genhtml_branch_coverage=1 00:07:53.565 --rc genhtml_function_coverage=1 00:07:53.565 --rc genhtml_legend=1 00:07:53.565 --rc geninfo_all_blocks=1 00:07:53.565 --rc geninfo_unexecuted_blocks=1 00:07:53.565 00:07:53.565 ' 00:07:53.565 14:48:26 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:53.565 14:48:26 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:53.565 14:48:26 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.565 14:48:26 -- nvmf/common.sh@7 -- # uname -s 00:07:53.565 14:48:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.565 14:48:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.565 14:48:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.565 14:48:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.565 14:48:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.565 14:48:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.565 14:48:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.565 14:48:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.565 14:48:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.565 14:48:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.565 14:48:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:07:53.565 14:48:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:07:53.565 14:48:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.565 14:48:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.565 14:48:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.565 14:48:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.565 14:48:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.565 14:48:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.565 14:48:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.565 14:48:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.565 14:48:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.565 14:48:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.565 14:48:26 -- paths/export.sh@5 -- # export PATH 00:07:53.565 14:48:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.565 14:48:26 -- nvmf/common.sh@46 -- # : 0 00:07:53.565 14:48:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:53.565 14:48:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:53.565 14:48:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:53.565 14:48:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.565 14:48:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.565 14:48:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:53.565 14:48:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:53.565 14:48:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:53.565 14:48:26 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:53.565 14:48:26 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:53.565 14:48:26 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:53.565 14:48:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.565 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.565 14:48:26 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:53.565 14:48:26 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:53.565 14:48:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:53.566 14:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.566 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.829 ************************************ 00:07:53.829 START TEST nvmf_example 00:07:53.829 ************************************ 00:07:53.829 14:48:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:53.829 * Looking for test storage... 00:07:53.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.829 14:48:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:53.829 14:48:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:53.829 14:48:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:53.829 14:48:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:53.829 14:48:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:53.829 14:48:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:53.829 14:48:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:53.829 14:48:26 -- scripts/common.sh@335 -- # IFS=.-: 00:07:53.829 14:48:26 -- scripts/common.sh@335 -- # read -ra ver1 00:07:53.829 14:48:26 -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.829 14:48:26 -- scripts/common.sh@336 -- # read -ra ver2 00:07:53.829 14:48:26 -- scripts/common.sh@337 -- # local 'op=<' 00:07:53.829 14:48:26 -- scripts/common.sh@339 -- # ver1_l=2 00:07:53.829 14:48:26 -- scripts/common.sh@340 -- # ver2_l=1 00:07:53.829 14:48:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:53.829 14:48:26 -- scripts/common.sh@343 -- # case "$op" in 00:07:53.829 14:48:26 -- scripts/common.sh@344 -- # : 1 00:07:53.829 14:48:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:53.829 14:48:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.829 14:48:26 -- scripts/common.sh@364 -- # decimal 1 00:07:53.829 14:48:26 -- scripts/common.sh@352 -- # local d=1 00:07:53.829 14:48:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.829 14:48:26 -- scripts/common.sh@354 -- # echo 1 00:07:53.829 14:48:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:53.829 14:48:26 -- scripts/common.sh@365 -- # decimal 2 00:07:53.829 14:48:26 -- scripts/common.sh@352 -- # local d=2 00:07:53.829 14:48:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.829 14:48:26 -- scripts/common.sh@354 -- # echo 2 00:07:53.829 14:48:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:53.829 14:48:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:53.829 14:48:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:53.829 14:48:26 -- scripts/common.sh@367 -- # return 0 00:07:53.829 14:48:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.829 14:48:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:53.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.829 --rc genhtml_branch_coverage=1 00:07:53.829 --rc genhtml_function_coverage=1 00:07:53.829 --rc genhtml_legend=1 00:07:53.829 --rc geninfo_all_blocks=1 00:07:53.829 --rc geninfo_unexecuted_blocks=1 00:07:53.829 00:07:53.829 ' 00:07:53.829 14:48:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:53.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.829 --rc genhtml_branch_coverage=1 00:07:53.829 --rc genhtml_function_coverage=1 00:07:53.829 --rc genhtml_legend=1 00:07:53.829 --rc geninfo_all_blocks=1 00:07:53.829 --rc geninfo_unexecuted_blocks=1 00:07:53.829 00:07:53.829 ' 00:07:53.829 14:48:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:53.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.829 --rc genhtml_branch_coverage=1 00:07:53.829 --rc genhtml_function_coverage=1 00:07:53.829 --rc genhtml_legend=1 00:07:53.829 --rc geninfo_all_blocks=1 00:07:53.829 --rc geninfo_unexecuted_blocks=1 00:07:53.829 00:07:53.829 ' 00:07:53.829 14:48:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:53.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.829 --rc genhtml_branch_coverage=1 00:07:53.829 --rc genhtml_function_coverage=1 00:07:53.829 --rc genhtml_legend=1 00:07:53.829 --rc geninfo_all_blocks=1 00:07:53.829 --rc geninfo_unexecuted_blocks=1 00:07:53.829 00:07:53.829 ' 00:07:53.829 14:48:26 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.829 14:48:26 -- nvmf/common.sh@7 -- # uname -s 00:07:53.829 14:48:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.829 14:48:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.829 14:48:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.829 14:48:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.829 14:48:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.829 14:48:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.829 14:48:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.829 14:48:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.829 14:48:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.829 14:48:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.829 14:48:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 
00:07:53.829 14:48:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:07:53.829 14:48:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.829 14:48:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.829 14:48:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.829 14:48:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.829 14:48:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.829 14:48:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.829 14:48:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.829 14:48:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.829 14:48:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.829 14:48:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.829 14:48:26 -- paths/export.sh@5 -- # export PATH 00:07:53.830 14:48:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.830 14:48:26 -- nvmf/common.sh@46 -- # : 0 00:07:53.830 14:48:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:53.830 14:48:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:53.830 14:48:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:53.830 14:48:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.830 14:48:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.830 14:48:26 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:53.830 14:48:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:53.830 14:48:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:53.830 14:48:26 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:53.830 14:48:26 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:53.830 14:48:26 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:53.830 14:48:26 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:53.830 14:48:26 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:53.830 14:48:26 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:53.830 14:48:26 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:53.830 14:48:26 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:53.830 14:48:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.830 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.830 14:48:26 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:53.830 14:48:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:53.830 14:48:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.830 14:48:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:53.830 14:48:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:53.830 14:48:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:53.830 14:48:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.830 14:48:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.830 14:48:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.830 14:48:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:53.830 14:48:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:53.830 14:48:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:53.830 14:48:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:53.830 14:48:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:53.830 14:48:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:53.830 14:48:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.830 14:48:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.830 14:48:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.830 14:48:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:53.830 14:48:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.830 14:48:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.830 14:48:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.830 14:48:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.830 14:48:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.830 14:48:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.830 14:48:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.830 14:48:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.830 14:48:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:53.830 Cannot find device "nvmf_init_br" 00:07:53.830 14:48:26 -- nvmf/common.sh@153 -- # true 00:07:53.830 14:48:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:53.830 Cannot find device "nvmf_tgt_br" 00:07:53.830 14:48:26 -- nvmf/common.sh@154 -- # true 00:07:53.830 14:48:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.089 Cannot find device "nvmf_tgt_br2" 
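The "Cannot find device ..." messages above are expected: nvmf_veth_init first tears down any interfaces left over from a previous run, and each cleanup command in the trace is followed by true so a missing device does not abort the script. The pattern is roughly (a sketch of the idiom, not the literal script text):

    # best-effort teardown of leftovers; ignore "Cannot find device"
    ip link set nvmf_init_br nomaster   || true
    ip link set nvmf_tgt_br down        || true
    ip link delete nvmf_br type bridge  || true
    ip link delete nvmf_init_if         || true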
00:07:54.089 14:48:26 -- nvmf/common.sh@155 -- # true 00:07:54.089 14:48:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:54.089 Cannot find device "nvmf_init_br" 00:07:54.089 14:48:26 -- nvmf/common.sh@156 -- # true 00:07:54.089 14:48:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:54.089 Cannot find device "nvmf_tgt_br" 00:07:54.089 14:48:26 -- nvmf/common.sh@157 -- # true 00:07:54.089 14:48:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:54.089 Cannot find device "nvmf_tgt_br2" 00:07:54.089 14:48:26 -- nvmf/common.sh@158 -- # true 00:07:54.089 14:48:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:54.089 Cannot find device "nvmf_br" 00:07:54.089 14:48:26 -- nvmf/common.sh@159 -- # true 00:07:54.089 14:48:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:54.089 Cannot find device "nvmf_init_if" 00:07:54.089 14:48:27 -- nvmf/common.sh@160 -- # true 00:07:54.089 14:48:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.089 14:48:27 -- nvmf/common.sh@161 -- # true 00:07:54.089 14:48:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.089 14:48:27 -- nvmf/common.sh@162 -- # true 00:07:54.089 14:48:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:54.089 14:48:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:54.089 14:48:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:54.089 14:48:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:54.089 14:48:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:54.089 14:48:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:54.089 14:48:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:54.089 14:48:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:54.089 14:48:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:54.089 14:48:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:54.089 14:48:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:54.089 14:48:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:54.089 14:48:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:54.089 14:48:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:54.089 14:48:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:54.089 14:48:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:54.089 14:48:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:54.089 14:48:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:54.089 14:48:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:54.348 14:48:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:54.348 14:48:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:54.348 14:48:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:54.348 14:48:27 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:54.348 14:48:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:54.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:07:54.348 00:07:54.348 --- 10.0.0.2 ping statistics --- 00:07:54.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.348 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:54.348 14:48:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:54.348 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:54.348 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:07:54.348 00:07:54.348 --- 10.0.0.3 ping statistics --- 00:07:54.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.348 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:07:54.348 14:48:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:54.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:07:54.348 00:07:54.348 --- 10.0.0.1 ping statistics --- 00:07:54.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.348 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:54.348 14:48:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.348 14:48:27 -- nvmf/common.sh@421 -- # return 0 00:07:54.348 14:48:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:54.348 14:48:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.348 14:48:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:54.348 14:48:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:54.348 14:48:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.348 14:48:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:54.348 14:48:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:54.348 14:48:27 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:54.348 14:48:27 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:54.348 14:48:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.348 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.348 14:48:27 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:54.348 14:48:27 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:54.348 14:48:27 -- target/nvmf_example.sh@34 -- # nvmfpid=72131 00:07:54.348 14:48:27 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:54.348 14:48:27 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:54.348 14:48:27 -- target/nvmf_example.sh@36 -- # waitforlisten 72131 00:07:54.348 14:48:27 -- common/autotest_common.sh@829 -- # '[' -z 72131 ']' 00:07:54.348 14:48:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.348 14:48:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.348 14:48:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
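Taken together, the commands above build the virtual test network for the TCP target: namespace nvmf_tgt_ns_spdk holds the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the peer ends are enslaved to bridge nvmf_br, and iptables admits TCP port 4420; the three pings then confirm reachability before the target starts. A condensed replay of the same commands (second target interface and loopback steps omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator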
00:07:54.348 14:48:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.348 14:48:27 -- common/autotest_common.sh@10 -- # set +x 00:07:55.286 14:48:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.286 14:48:28 -- common/autotest_common.sh@862 -- # return 0 00:07:55.286 14:48:28 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:55.286 14:48:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.286 14:48:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.546 14:48:28 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.546 14:48:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.546 14:48:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.546 14:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.546 14:48:28 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:55.546 14:48:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.546 14:48:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.546 14:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.546 14:48:28 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:55.546 14:48:28 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:55.546 14:48:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.546 14:48:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.546 14:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.546 14:48:28 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:55.546 14:48:28 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.546 14:48:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.546 14:48:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.546 14:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.546 14:48:28 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.546 14:48:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.546 14:48:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.546 14:48:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.546 14:48:28 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:55.546 14:48:28 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:07.752 Initializing NVMe Controllers 00:08:07.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:07.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:07.752 Initialization complete. Launching workers. 
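With the example nvmf app (build/examples/nvmf -i 0 -g 10000 -m 0xF) running inside the namespace, the test configures it over RPC and then drives it with spdk_nvme_perf; the latency summary that follows is the output of that run. The RPC and perf sequence from the trace, condensed (paths shortened to ./):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512              # creates Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'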
00:08:07.752 ======================================================== 00:08:07.752 Latency(us) 00:08:07.752 Device Information : IOPS MiB/s Average min max 00:08:07.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16997.57 66.40 3764.96 559.57 21115.02 00:08:07.752 ======================================================== 00:08:07.752 Total : 16997.57 66.40 3764.96 559.57 21115.02 00:08:07.752 00:08:07.752 14:48:38 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:07.752 14:48:38 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:07.752 14:48:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:07.752 14:48:38 -- nvmf/common.sh@116 -- # sync 00:08:07.752 14:48:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:07.752 14:48:38 -- nvmf/common.sh@119 -- # set +e 00:08:07.752 14:48:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:07.752 14:48:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:07.752 rmmod nvme_tcp 00:08:07.752 rmmod nvme_fabrics 00:08:07.752 rmmod nvme_keyring 00:08:07.752 14:48:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:07.752 14:48:38 -- nvmf/common.sh@123 -- # set -e 00:08:07.752 14:48:38 -- nvmf/common.sh@124 -- # return 0 00:08:07.752 14:48:38 -- nvmf/common.sh@477 -- # '[' -n 72131 ']' 00:08:07.752 14:48:38 -- nvmf/common.sh@478 -- # killprocess 72131 00:08:07.752 14:48:38 -- common/autotest_common.sh@936 -- # '[' -z 72131 ']' 00:08:07.752 14:48:38 -- common/autotest_common.sh@940 -- # kill -0 72131 00:08:07.752 14:48:38 -- common/autotest_common.sh@941 -- # uname 00:08:07.752 14:48:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:07.752 14:48:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72131 00:08:07.752 14:48:38 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:07.752 14:48:38 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:07.752 killing process with pid 72131 00:08:07.752 14:48:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72131' 00:08:07.752 14:48:38 -- common/autotest_common.sh@955 -- # kill 72131 00:08:07.752 14:48:38 -- common/autotest_common.sh@960 -- # wait 72131 00:08:07.752 nvmf threads initialize successfully 00:08:07.752 bdev subsystem init successfully 00:08:07.752 created a nvmf target service 00:08:07.752 create targets's poll groups done 00:08:07.752 all subsystems of target started 00:08:07.752 nvmf target is running 00:08:07.752 all subsystems of target stopped 00:08:07.752 destroy targets's poll groups done 00:08:07.752 destroyed the nvmf target service 00:08:07.752 bdev subsystem finish successfully 00:08:07.752 nvmf threads destroy successfully 00:08:07.752 14:48:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:07.752 14:48:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:07.752 14:48:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:07.752 14:48:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.752 14:48:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:07.752 14:48:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.752 14:48:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.752 14:48:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.752 14:48:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:07.752 14:48:39 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:07.752 14:48:39 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:08:07.752 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:08:07.752 00:08:07.752 real 0m12.444s 00:08:07.752 user 0m44.558s 00:08:07.752 sys 0m1.998s 00:08:07.752 14:48:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.752 ************************************ 00:08:07.752 END TEST nvmf_example 00:08:07.752 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:08:07.752 ************************************ 00:08:07.752 14:48:39 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:07.752 14:48:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:07.752 14:48:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.752 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:08:07.752 ************************************ 00:08:07.752 START TEST nvmf_filesystem 00:08:07.752 ************************************ 00:08:07.752 14:48:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:07.752 * Looking for test storage... 00:08:07.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.752 14:48:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:07.752 14:48:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:07.752 14:48:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:07.752 14:48:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:07.753 14:48:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:07.753 14:48:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:07.753 14:48:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:07.753 14:48:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:07.753 14:48:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:07.753 14:48:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.753 14:48:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:07.753 14:48:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:07.753 14:48:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:07.753 14:48:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:07.753 14:48:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:07.753 14:48:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:07.753 14:48:39 -- scripts/common.sh@344 -- # : 1 00:08:07.753 14:48:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:07.753 14:48:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.753 14:48:39 -- scripts/common.sh@364 -- # decimal 1 00:08:07.753 14:48:39 -- scripts/common.sh@352 -- # local d=1 00:08:07.753 14:48:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.753 14:48:39 -- scripts/common.sh@354 -- # echo 1 00:08:07.753 14:48:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:07.753 14:48:39 -- scripts/common.sh@365 -- # decimal 2 00:08:07.753 14:48:39 -- scripts/common.sh@352 -- # local d=2 00:08:07.753 14:48:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.753 14:48:39 -- scripts/common.sh@354 -- # echo 2 00:08:07.753 14:48:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:07.753 14:48:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:07.753 14:48:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:07.753 14:48:39 -- scripts/common.sh@367 -- # return 0 00:08:07.753 14:48:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.753 14:48:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.753 --rc genhtml_branch_coverage=1 00:08:07.753 --rc genhtml_function_coverage=1 00:08:07.753 --rc genhtml_legend=1 00:08:07.753 --rc geninfo_all_blocks=1 00:08:07.753 --rc geninfo_unexecuted_blocks=1 00:08:07.753 00:08:07.753 ' 00:08:07.753 14:48:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.753 --rc genhtml_branch_coverage=1 00:08:07.753 --rc genhtml_function_coverage=1 00:08:07.753 --rc genhtml_legend=1 00:08:07.753 --rc geninfo_all_blocks=1 00:08:07.753 --rc geninfo_unexecuted_blocks=1 00:08:07.753 00:08:07.753 ' 00:08:07.753 14:48:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.753 --rc genhtml_branch_coverage=1 00:08:07.753 --rc genhtml_function_coverage=1 00:08:07.753 --rc genhtml_legend=1 00:08:07.753 --rc geninfo_all_blocks=1 00:08:07.753 --rc geninfo_unexecuted_blocks=1 00:08:07.753 00:08:07.753 ' 00:08:07.753 14:48:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:07.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.753 --rc genhtml_branch_coverage=1 00:08:07.753 --rc genhtml_function_coverage=1 00:08:07.753 --rc genhtml_legend=1 00:08:07.753 --rc geninfo_all_blocks=1 00:08:07.753 --rc geninfo_unexecuted_blocks=1 00:08:07.753 00:08:07.753 ' 00:08:07.753 14:48:39 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:07.753 14:48:39 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:07.753 14:48:39 -- common/autotest_common.sh@34 -- # set -e 00:08:07.753 14:48:39 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:07.753 14:48:39 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:07.753 14:48:39 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:07.753 14:48:39 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:07.753 14:48:39 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:07.753 14:48:39 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:07.753 14:48:39 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:07.753 14:48:39 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:07.753 14:48:39 -- common/build_config.sh@5 -- # 
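The lcov probe traced here goes through scripts/common.sh, which splits both version strings on '.', '-' and ':' (the IFS=.-: lines) and compares them field by field, so "lt 1.15 2" returns 0. A simplified sketch of that comparison, not the exact SPDK implementation:

    # Returns 0 when $1 is strictly older than $2, e.g. version_lt 1.15 2.
    version_lt() {
        local IFS=.-:
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1   # equal versions are not "less than"
    }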
CONFIG_USDT=y 00:08:07.753 14:48:39 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:07.753 14:48:39 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:07.753 14:48:39 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:07.753 14:48:39 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:07.753 14:48:39 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:07.753 14:48:39 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:07.753 14:48:39 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:07.753 14:48:39 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:07.753 14:48:39 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:07.753 14:48:39 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:07.753 14:48:39 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:07.753 14:48:39 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:07.753 14:48:39 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:07.753 14:48:39 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:07.753 14:48:39 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:07.753 14:48:39 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:07.753 14:48:39 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:07.753 14:48:39 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:07.753 14:48:39 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:07.753 14:48:39 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:07.753 14:48:39 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:07.753 14:48:39 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:07.753 14:48:39 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:07.753 14:48:39 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:07.753 14:48:39 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:07.753 14:48:39 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:07.753 14:48:39 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:07.753 14:48:39 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:07.753 14:48:39 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:07.753 14:48:39 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:07.753 14:48:39 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:07.753 14:48:39 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:07.753 14:48:39 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:07.753 14:48:39 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:07.753 14:48:39 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:07.753 14:48:39 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:07.753 14:48:39 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:07.753 14:48:39 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:07.753 14:48:39 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:07.753 14:48:39 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:07.753 14:48:39 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:07.753 14:48:39 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:07.753 14:48:39 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:07.753 14:48:39 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:07.753 14:48:39 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:07.753 14:48:39 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:08:07.753 14:48:39 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:07.753 14:48:39 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:07.753 14:48:39 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:07.753 14:48:39 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:07.753 14:48:39 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:07.753 14:48:39 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:07.753 14:48:39 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:07.753 14:48:39 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:07.753 14:48:39 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:07.753 14:48:39 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:07.753 14:48:39 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:07.753 14:48:39 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:07.753 14:48:39 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:07.753 14:48:39 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:07.753 14:48:39 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:07.753 14:48:39 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:07.753 14:48:39 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:08:07.753 14:48:39 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:07.753 14:48:39 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:07.753 14:48:39 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:07.753 14:48:39 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:07.753 14:48:39 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:07.753 14:48:39 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:07.753 14:48:39 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:07.753 14:48:39 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:07.753 14:48:39 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:07.753 14:48:39 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:07.753 14:48:39 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:07.753 14:48:39 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:07.753 14:48:39 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:07.753 14:48:39 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:07.753 14:48:39 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:07.753 14:48:39 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:07.753 14:48:39 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:07.753 14:48:39 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:07.753 14:48:39 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:07.753 14:48:39 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:07.753 14:48:39 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:07.753 14:48:39 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:07.754 14:48:39 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:07.754 14:48:39 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:07.754 14:48:39 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:08:07.754 14:48:39 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:07.754 14:48:39 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:07.754 #define SPDK_CONFIG_H 00:08:07.754 #define SPDK_CONFIG_APPS 1 00:08:07.754 #define SPDK_CONFIG_ARCH native 00:08:07.754 #undef SPDK_CONFIG_ASAN 00:08:07.754 #define SPDK_CONFIG_AVAHI 1 00:08:07.754 #undef SPDK_CONFIG_CET 00:08:07.754 #define SPDK_CONFIG_COVERAGE 1 00:08:07.754 #define SPDK_CONFIG_CROSS_PREFIX 00:08:07.754 #undef SPDK_CONFIG_CRYPTO 00:08:07.754 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:07.754 #undef SPDK_CONFIG_CUSTOMOCF 00:08:07.754 #undef SPDK_CONFIG_DAOS 00:08:07.754 #define SPDK_CONFIG_DAOS_DIR 00:08:07.754 #define SPDK_CONFIG_DEBUG 1 00:08:07.754 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:07.754 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:07.754 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:07.754 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:07.754 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:07.754 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:07.754 #define SPDK_CONFIG_EXAMPLES 1 00:08:07.754 #undef SPDK_CONFIG_FC 00:08:07.754 #define SPDK_CONFIG_FC_PATH 00:08:07.754 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:07.754 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:07.754 #undef SPDK_CONFIG_FUSE 00:08:07.754 #undef SPDK_CONFIG_FUZZER 00:08:07.754 #define SPDK_CONFIG_FUZZER_LIB 00:08:07.754 #define SPDK_CONFIG_GOLANG 1 00:08:07.754 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:07.754 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:07.754 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:07.754 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:07.754 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:07.754 #define SPDK_CONFIG_IDXD 1 00:08:07.754 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:07.754 #undef SPDK_CONFIG_IPSEC_MB 00:08:07.754 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:07.754 #define SPDK_CONFIG_ISAL 1 00:08:07.754 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:07.754 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:07.754 #define SPDK_CONFIG_LIBDIR 00:08:07.754 #undef SPDK_CONFIG_LTO 00:08:07.754 #define SPDK_CONFIG_MAX_LCORES 00:08:07.754 #define SPDK_CONFIG_NVME_CUSE 1 00:08:07.754 #undef SPDK_CONFIG_OCF 00:08:07.754 #define SPDK_CONFIG_OCF_PATH 00:08:07.754 #define SPDK_CONFIG_OPENSSL_PATH 00:08:07.754 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:07.754 #undef SPDK_CONFIG_PGO_USE 00:08:07.754 #define SPDK_CONFIG_PREFIX /usr/local 00:08:07.754 #undef SPDK_CONFIG_RAID5F 00:08:07.754 #undef SPDK_CONFIG_RBD 00:08:07.754 #define SPDK_CONFIG_RDMA 1 00:08:07.754 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:07.754 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:07.754 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:07.754 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:07.754 #define SPDK_CONFIG_SHARED 1 00:08:07.754 #undef SPDK_CONFIG_SMA 00:08:07.754 #define SPDK_CONFIG_TESTS 1 00:08:07.754 #undef SPDK_CONFIG_TSAN 00:08:07.754 #define SPDK_CONFIG_UBLK 1 00:08:07.754 #define SPDK_CONFIG_UBSAN 1 00:08:07.754 #undef SPDK_CONFIG_UNIT_TESTS 00:08:07.754 #undef SPDK_CONFIG_URING 00:08:07.754 #define SPDK_CONFIG_URING_PATH 00:08:07.754 #undef SPDK_CONFIG_URING_ZNS 00:08:07.754 #define SPDK_CONFIG_USDT 1 00:08:07.754 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:07.754 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:07.754 #undef SPDK_CONFIG_VFIO_USER 00:08:07.754 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:08:07.754 #define SPDK_CONFIG_VHOST 1 00:08:07.754 #define SPDK_CONFIG_VIRTIO 1 00:08:07.754 #undef SPDK_CONFIG_VTUNE 00:08:07.754 #define SPDK_CONFIG_VTUNE_DIR 00:08:07.754 #define SPDK_CONFIG_WERROR 1 00:08:07.754 #define SPDK_CONFIG_WPDK_DIR 00:08:07.754 #undef SPDK_CONFIG_XNVME 00:08:07.754 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:07.754 14:48:39 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:07.754 14:48:39 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.754 14:48:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.754 14:48:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.754 14:48:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.754 14:48:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.754 14:48:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.754 14:48:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.754 14:48:39 -- paths/export.sh@5 -- # export PATH 00:08:07.754 14:48:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.754 14:48:39 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:07.754 14:48:39 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:07.754 14:48:39 -- pm/common@6 -- # readlink -f 
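The large SPDK_CONFIG_H dump above comes from applications.sh, which decides whether debug-only apps may run by substring-matching the generated include/spdk/config.h against '#define SPDK_CONFIG_DEBUG'. A minimal sketch of that style of check; the helper name is illustrative:

    # Return success if the given SPDK build tree was configured with a flag, e.g.
    #   config_has /home/vagrant/spdk_repo/spdk SPDK_CONFIG_DEBUG
    config_has() {
        local repo=$1 flag=$2
        local header="$repo/include/spdk/config.h"
        [[ -e $header ]] || return 1
        [[ $(<"$header") == *"#define $flag"* ]]
    }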
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:07.754 14:48:39 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:07.754 14:48:39 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:07.754 14:48:39 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:07.754 14:48:39 -- pm/common@16 -- # TEST_TAG=N/A 00:08:07.754 14:48:39 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:07.754 14:48:39 -- common/autotest_common.sh@52 -- # : 1 00:08:07.754 14:48:39 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:07.754 14:48:39 -- common/autotest_common.sh@56 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:07.754 14:48:39 -- common/autotest_common.sh@58 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:07.754 14:48:39 -- common/autotest_common.sh@60 -- # : 1 00:08:07.754 14:48:39 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:07.754 14:48:39 -- common/autotest_common.sh@62 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:07.754 14:48:39 -- common/autotest_common.sh@64 -- # : 00:08:07.754 14:48:39 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:07.754 14:48:39 -- common/autotest_common.sh@66 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:07.754 14:48:39 -- common/autotest_common.sh@68 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:07.754 14:48:39 -- common/autotest_common.sh@70 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:07.754 14:48:39 -- common/autotest_common.sh@72 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:07.754 14:48:39 -- common/autotest_common.sh@74 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:07.754 14:48:39 -- common/autotest_common.sh@76 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:07.754 14:48:39 -- common/autotest_common.sh@78 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:07.754 14:48:39 -- common/autotest_common.sh@80 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:07.754 14:48:39 -- common/autotest_common.sh@82 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:07.754 14:48:39 -- common/autotest_common.sh@84 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:07.754 14:48:39 -- common/autotest_common.sh@86 -- # : 1 00:08:07.754 14:48:39 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:07.754 14:48:39 -- common/autotest_common.sh@88 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:07.754 14:48:39 -- common/autotest_common.sh@90 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:07.754 14:48:39 -- common/autotest_common.sh@92 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:07.754 14:48:39 -- common/autotest_common.sh@94 -- # : 0 00:08:07.754 14:48:39 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:07.754 14:48:39 -- common/autotest_common.sh@96 -- # : tcp 00:08:07.754 14:48:39 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:07.754 14:48:39 -- common/autotest_common.sh@98 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:07.754 14:48:39 -- common/autotest_common.sh@100 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:07.754 14:48:39 -- common/autotest_common.sh@102 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:07.754 14:48:39 -- common/autotest_common.sh@104 -- # : 0 00:08:07.754 14:48:39 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:07.754 14:48:39 -- common/autotest_common.sh@106 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:07.755 14:48:39 -- common/autotest_common.sh@108 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:07.755 14:48:39 -- common/autotest_common.sh@110 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:07.755 14:48:39 -- common/autotest_common.sh@112 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:07.755 14:48:39 -- common/autotest_common.sh@114 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:07.755 14:48:39 -- common/autotest_common.sh@116 -- # : 1 00:08:07.755 14:48:39 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:07.755 14:48:39 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:07.755 14:48:39 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:07.755 14:48:39 -- common/autotest_common.sh@120 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:07.755 14:48:39 -- common/autotest_common.sh@122 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:07.755 14:48:39 -- common/autotest_common.sh@124 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:07.755 14:48:39 -- common/autotest_common.sh@126 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:07.755 14:48:39 -- common/autotest_common.sh@128 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:07.755 14:48:39 -- common/autotest_common.sh@130 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:07.755 14:48:39 -- common/autotest_common.sh@132 -- # : v23.11 00:08:07.755 14:48:39 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:07.755 14:48:39 -- common/autotest_common.sh@134 -- # : true 00:08:07.755 14:48:39 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:07.755 14:48:39 -- common/autotest_common.sh@136 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:07.755 14:48:39 -- common/autotest_common.sh@138 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:07.755 14:48:39 -- common/autotest_common.sh@140 -- # : 1 00:08:07.755 14:48:39 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:07.755 14:48:39 -- 
common/autotest_common.sh@142 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:07.755 14:48:39 -- common/autotest_common.sh@144 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:07.755 14:48:39 -- common/autotest_common.sh@146 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:07.755 14:48:39 -- common/autotest_common.sh@148 -- # : 00:08:07.755 14:48:39 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:07.755 14:48:39 -- common/autotest_common.sh@150 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:07.755 14:48:39 -- common/autotest_common.sh@152 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:07.755 14:48:39 -- common/autotest_common.sh@154 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:07.755 14:48:39 -- common/autotest_common.sh@156 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:07.755 14:48:39 -- common/autotest_common.sh@158 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:07.755 14:48:39 -- common/autotest_common.sh@160 -- # : 0 00:08:07.755 14:48:39 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:07.755 14:48:39 -- common/autotest_common.sh@163 -- # : 00:08:07.755 14:48:39 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:07.755 14:48:39 -- common/autotest_common.sh@165 -- # : 1 00:08:07.755 14:48:39 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:07.755 14:48:39 -- common/autotest_common.sh@167 -- # : 1 00:08:07.755 14:48:39 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:07.755 14:48:39 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:07.755 14:48:39 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:07.755 14:48:39 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:07.755 14:48:39 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:07.755 14:48:39 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:07.755 14:48:39 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:07.755 14:48:39 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:07.755 14:48:39 -- common/autotest_common.sh@174 -- # 
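The long run of ': 0' / 'export SPDK_TEST_*' pairs is autotest_common.sh giving every test flag a default while still honouring values the CI job already exported (SPDK_TEST_NVMF is 1 and SPDK_TEST_NVMF_TRANSPORT is tcp in this run). The usual bash idiom behind those trace lines looks roughly like the following; treat it as an assumption about how the script is written rather than a quote from it:

    # Default each flag only if the environment did not already set it, then
    # export it so run_test and the per-target scripts see the same value.
    : "${SPDK_TEST_NVMF:=0}"
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT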
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:07.755 14:48:39 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:07.755 14:48:39 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:07.755 14:48:39 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:07.755 14:48:39 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:07.755 14:48:39 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:07.755 14:48:39 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:07.755 14:48:39 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:07.755 14:48:39 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:07.755 14:48:39 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:07.755 14:48:39 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:07.755 14:48:39 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:07.755 14:48:39 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:07.755 14:48:39 -- common/autotest_common.sh@196 -- # cat 00:08:07.755 14:48:39 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:07.755 14:48:39 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:07.755 14:48:39 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:07.755 14:48:39 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:07.755 14:48:39 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:07.755 14:48:39 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:07.755 14:48:39 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:07.755 14:48:39 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:07.755 14:48:39 -- 
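Before any test binary runs, autotest_common.sh also pins the sanitizer environment: it writes a leak-suppression file and exports ASAN/UBSAN/LSAN options so failures abort with exit code 134 instead of hanging. A condensed sketch of that setup, with the option strings copied from the trace:

    suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$suppression_file"
    echo "leak:libfuse3.so" > "$suppression_file"          # ignore a known libfuse leak
    export LSAN_OPTIONS="suppressions=$suppression_file"
    export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"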
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:07.755 14:48:39 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:07.755 14:48:39 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:07.755 14:48:39 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:07.755 14:48:39 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:07.755 14:48:39 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:07.755 14:48:39 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:07.755 14:48:39 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:07.755 14:48:39 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:07.755 14:48:39 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:07.755 14:48:39 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:07.755 14:48:39 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:07.755 14:48:39 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:07.755 14:48:39 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:07.755 14:48:39 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:07.755 14:48:39 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:07.755 14:48:39 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:07.755 14:48:39 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:07.755 14:48:39 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:07.755 14:48:39 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:07.755 14:48:39 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:07.755 14:48:39 -- common/autotest_common.sh@259 -- # valgrind= 00:08:07.755 14:48:39 -- common/autotest_common.sh@265 -- # uname -s 00:08:07.755 14:48:39 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:07.755 14:48:39 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:07.755 14:48:39 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:07.755 14:48:39 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:07.755 14:48:39 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:07.755 14:48:39 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:07.755 14:48:39 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:07.755 14:48:39 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:08:07.755 14:48:39 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:07.756 14:48:39 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:07.756 14:48:39 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:07.756 14:48:39 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:07.756 14:48:39 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:07.756 14:48:39 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:07.756 14:48:39 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:07.756 14:48:39 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:08:07.756 14:48:39 -- common/autotest_common.sh@319 -- # [[ 
-z 72368 ]] 00:08:07.756 14:48:39 -- common/autotest_common.sh@319 -- # kill -0 72368 00:08:07.756 14:48:39 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:07.756 14:48:39 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:07.756 14:48:39 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:07.756 14:48:39 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:07.756 14:48:39 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:07.756 14:48:39 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:07.756 14:48:39 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:07.756 14:48:39 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:07.756 14:48:39 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.RtTO4F 00:08:07.756 14:48:39 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:07.756 14:48:39 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:07.756 14:48:39 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:07.756 14:48:39 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.RtTO4F/tests/target /tmp/spdk.RtTO4F 00:08:07.756 14:48:39 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@328 -- # df -T 00:08:07.756 14:48:39 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293772800 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:07.756 14:48:39 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289940480 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:08:07.756 14:48:39 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265163776 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266421248 00:08:07.756 14:48:39 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:08:07.756 14:48:39 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:08:07.756 14:48:39 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293772800 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:07.756 14:48:39 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289940480 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:08:07.756 14:48:39 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266286080 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:08:07.756 14:48:39 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:08:07.756 14:48:39 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:08:07.756 14:48:39 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:08:07.756 14:48:39 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # avails["$mount"]=98360332288 00:08:07.756 14:48:39 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:08:07.756 14:48:39 -- common/autotest_common.sh@364 -- # uses["$mount"]=1342447616 00:08:07.756 14:48:39 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:07.756 14:48:39 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:08:07.756 * Looking for test storage... 00:08:07.756 14:48:39 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:07.756 14:48:39 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:07.756 14:48:39 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.756 14:48:39 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:07.756 14:48:39 -- common/autotest_common.sh@373 -- # mount=/home 00:08:07.756 14:48:39 -- common/autotest_common.sh@375 -- # target_space=13293772800 00:08:07.756 14:48:39 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:07.756 14:48:39 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:07.756 14:48:39 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:08:07.756 14:48:39 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:08:07.756 14:48:39 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:08:07.756 14:48:39 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.756 14:48:39 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.756 14:48:39 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.756 14:48:39 -- common/autotest_common.sh@390 -- # return 0 00:08:07.756 14:48:39 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:07.756 14:48:39 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:07.756 14:48:39 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:07.756 14:48:39 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:07.756 14:48:39 -- common/autotest_common.sh@1682 -- # true 00:08:07.756 14:48:39 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:07.756 14:48:39 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:07.756 14:48:39 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:07.756 14:48:39 -- common/autotest_common.sh@27 -- # exec 00:08:07.756 14:48:39 -- common/autotest_common.sh@29 -- # exec 00:08:07.756 14:48:39 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:07.756 14:48:39 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
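set_test_storage, traced above as df -T plus the mounts/avails arrays, walks the candidate directories and keeps the first one whose backing filesystem still has the requested ~2 GiB free; here /home/vagrant/spdk_repo/spdk/test/nvmf/target on /home wins. A simplified sketch of that selection logic; the function name and structure are illustrative:

    # Pick the first candidate directory whose backing filesystem still has
    # enough free space; mirrors the storage_candidates fallback order above.
    pick_test_storage() {
        local requested_bytes=$1; shift
        local dir avail_kb
        for dir in "$@"; do
            mkdir -p "$dir" 2>/dev/null || continue
            avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')   # available KiB on the backing mount
            if (( avail_kb * 1024 >= requested_bytes )); then
                echo "$dir"
                return 0
            fi
        done
        return 1
    }

Called as, say, pick_test_storage 2214592512 "$testdir" /tmp, it returns the first directory that can hold the 2 GiB plus slack requested above.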
0 : 0 - 1]' 00:08:07.756 14:48:39 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:07.756 14:48:39 -- common/autotest_common.sh@18 -- # set -x 00:08:07.756 14:48:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:07.756 14:48:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:07.756 14:48:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:07.756 14:48:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:07.756 14:48:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:07.756 14:48:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:07.756 14:48:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:07.756 14:48:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:07.756 14:48:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:07.756 14:48:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.756 14:48:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:07.756 14:48:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:07.756 14:48:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:07.756 14:48:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:07.756 14:48:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:07.756 14:48:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:07.756 14:48:39 -- scripts/common.sh@344 -- # : 1 00:08:07.756 14:48:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:07.756 14:48:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.756 14:48:39 -- scripts/common.sh@364 -- # decimal 1 00:08:07.756 14:48:39 -- scripts/common.sh@352 -- # local d=1 00:08:07.756 14:48:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.756 14:48:39 -- scripts/common.sh@354 -- # echo 1 00:08:07.756 14:48:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:07.756 14:48:39 -- scripts/common.sh@365 -- # decimal 2 00:08:07.756 14:48:39 -- scripts/common.sh@352 -- # local d=2 00:08:07.757 14:48:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.757 14:48:39 -- scripts/common.sh@354 -- # echo 2 00:08:07.757 14:48:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:07.757 14:48:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:07.757 14:48:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:07.757 14:48:39 -- scripts/common.sh@367 -- # return 0 00:08:07.757 14:48:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.757 14:48:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:07.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.757 --rc genhtml_branch_coverage=1 00:08:07.757 --rc genhtml_function_coverage=1 00:08:07.757 --rc genhtml_legend=1 00:08:07.757 --rc geninfo_all_blocks=1 00:08:07.757 --rc geninfo_unexecuted_blocks=1 00:08:07.757 00:08:07.757 ' 00:08:07.757 14:48:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:07.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.757 --rc genhtml_branch_coverage=1 00:08:07.757 --rc genhtml_function_coverage=1 00:08:07.757 --rc genhtml_legend=1 00:08:07.757 --rc geninfo_all_blocks=1 00:08:07.757 --rc geninfo_unexecuted_blocks=1 00:08:07.757 00:08:07.757 ' 00:08:07.757 14:48:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:07.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.757 --rc genhtml_branch_coverage=1 00:08:07.757 --rc genhtml_function_coverage=1 00:08:07.757 --rc genhtml_legend=1 00:08:07.757 --rc geninfo_all_blocks=1 00:08:07.757 --rc 
geninfo_unexecuted_blocks=1 00:08:07.757 00:08:07.757 ' 00:08:07.757 14:48:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:07.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.757 --rc genhtml_branch_coverage=1 00:08:07.757 --rc genhtml_function_coverage=1 00:08:07.757 --rc genhtml_legend=1 00:08:07.757 --rc geninfo_all_blocks=1 00:08:07.757 --rc geninfo_unexecuted_blocks=1 00:08:07.757 00:08:07.757 ' 00:08:07.757 14:48:39 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.757 14:48:39 -- nvmf/common.sh@7 -- # uname -s 00:08:07.757 14:48:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.757 14:48:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.757 14:48:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.757 14:48:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.757 14:48:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.757 14:48:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.757 14:48:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.757 14:48:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.757 14:48:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.757 14:48:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.757 14:48:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:08:07.757 14:48:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:08:07.757 14:48:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.757 14:48:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.757 14:48:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.757 14:48:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.757 14:48:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.757 14:48:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.757 14:48:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.757 14:48:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.757 14:48:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.757 14:48:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.757 14:48:39 -- paths/export.sh@5 -- # export PATH 00:08:07.757 14:48:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.757 14:48:39 -- nvmf/common.sh@46 -- # : 0 00:08:07.757 14:48:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:07.757 14:48:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:07.757 14:48:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:07.757 14:48:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.757 14:48:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.757 14:48:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:07.757 14:48:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:07.757 14:48:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:07.757 14:48:39 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:07.757 14:48:39 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:07.757 14:48:39 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:07.757 14:48:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:07.757 14:48:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.757 14:48:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:07.757 14:48:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:07.757 14:48:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:07.757 14:48:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.757 14:48:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.757 14:48:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.757 14:48:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:07.757 14:48:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:07.757 14:48:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:07.757 14:48:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:07.757 14:48:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:07.757 14:48:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:07.757 14:48:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.757 14:48:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.757 14:48:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:07.757 14:48:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:07.757 14:48:39 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:07.757 14:48:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:07.757 14:48:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:07.757 14:48:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.757 14:48:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:07.757 14:48:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:07.757 14:48:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:07.757 14:48:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:07.757 14:48:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:07.757 14:48:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:07.757 Cannot find device "nvmf_tgt_br" 00:08:07.757 14:48:39 -- nvmf/common.sh@154 -- # true 00:08:07.757 14:48:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.757 Cannot find device "nvmf_tgt_br2" 00:08:07.757 14:48:39 -- nvmf/common.sh@155 -- # true 00:08:07.757 14:48:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:07.757 14:48:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:07.758 Cannot find device "nvmf_tgt_br" 00:08:07.758 14:48:39 -- nvmf/common.sh@157 -- # true 00:08:07.758 14:48:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:07.758 Cannot find device "nvmf_tgt_br2" 00:08:07.758 14:48:39 -- nvmf/common.sh@158 -- # true 00:08:07.758 14:48:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:07.758 14:48:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:07.758 14:48:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:07.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.758 14:48:39 -- nvmf/common.sh@161 -- # true 00:08:07.758 14:48:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:07.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.758 14:48:39 -- nvmf/common.sh@162 -- # true 00:08:07.758 14:48:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:07.758 14:48:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:07.758 14:48:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:07.758 14:48:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:07.758 14:48:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:07.758 14:48:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:07.758 14:48:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:07.758 14:48:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:07.758 14:48:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:07.758 14:48:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:07.758 14:48:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:07.758 14:48:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:07.758 14:48:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:07.758 14:48:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:07.758 14:48:39 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:07.758 14:48:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:07.758 14:48:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:07.758 14:48:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:07.758 14:48:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:07.758 14:48:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:07.758 14:48:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:07.758 14:48:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:07.758 14:48:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:07.758 14:48:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:07.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:08:07.758 00:08:07.758 --- 10.0.0.2 ping statistics --- 00:08:07.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.758 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:07.758 14:48:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:07.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:07.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:07.758 00:08:07.758 --- 10.0.0.3 ping statistics --- 00:08:07.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.758 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:07.758 14:48:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:07.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:07.758 00:08:07.758 --- 10.0.0.1 ping statistics --- 00:08:07.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.758 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:07.758 14:48:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.758 14:48:39 -- nvmf/common.sh@421 -- # return 0 00:08:07.758 14:48:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:07.758 14:48:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.758 14:48:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:07.758 14:48:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:07.758 14:48:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.758 14:48:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:07.758 14:48:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:07.758 14:48:39 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:07.758 14:48:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:07.758 14:48:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.758 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:08:07.758 ************************************ 00:08:07.758 START TEST nvmf_filesystem_no_in_capsule 00:08:07.758 ************************************ 00:08:07.758 14:48:39 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:07.758 14:48:39 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:07.758 14:48:39 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:07.758 14:48:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:07.758 14:48:39 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:07.758 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:08:07.758 14:48:39 -- nvmf/common.sh@469 -- # nvmfpid=72550 00:08:07.758 14:48:39 -- nvmf/common.sh@470 -- # waitforlisten 72550 00:08:07.758 14:48:39 -- common/autotest_common.sh@829 -- # '[' -z 72550 ']' 00:08:07.758 14:48:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.758 14:48:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.758 14:48:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.758 14:48:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.758 14:48:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.758 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:08:07.758 [2024-12-01 14:48:40.007458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:07.758 [2024-12-01 14:48:40.007558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.758 [2024-12-01 14:48:40.143033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.758 [2024-12-01 14:48:40.197985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:07.758 [2024-12-01 14:48:40.198119] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.758 [2024-12-01 14:48:40.198132] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.758 [2024-12-01 14:48:40.198141] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
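The trace above first tries to delete any leftover interfaces (the "Cannot find device" and "Cannot open network namespace" messages appear to be that defensive cleanup failing harmlessly) and then rebuilds the veth/bridge topology that the target is launched into. Condensed into a standalone sketch, with the interface, namespace and address names taken straight from the log (requires root; the iptables rules assume port 4420 for NVMe/TCP exactly as in the trace):

  # Rebuild the test topology from nvmf/common.sh@165-206: one veth pair for
  # the initiator on the host side, two for the target inside the namespace,
  # all host-side peers enslaved to a single bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                  # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host
  # Finally the target is started inside the namespace, producing the EAL
  # banner above (binary path as printed in the trace):
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &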
00:08:07.758 [2024-12-01 14:48:40.198516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.758 [2024-12-01 14:48:40.198667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.758 [2024-12-01 14:48:40.199524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.758 [2024-12-01 14:48:40.199568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.018 14:48:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.018 14:48:41 -- common/autotest_common.sh@862 -- # return 0 00:08:08.018 14:48:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:08.018 14:48:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.018 14:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.018 14:48:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.018 14:48:41 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:08.018 14:48:41 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:08.018 14:48:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.018 14:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.018 [2024-12-01 14:48:41.074405] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.018 14:48:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.018 14:48:41 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:08.018 14:48:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.018 14:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.278 Malloc1 00:08:08.278 14:48:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.278 14:48:41 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:08.278 14:48:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.278 14:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.278 14:48:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.278 14:48:41 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:08.278 14:48:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.278 14:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.278 14:48:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.278 14:48:41 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.278 14:48:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.278 14:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.278 [2024-12-01 14:48:41.251315] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.278 14:48:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.278 14:48:41 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:08.278 14:48:41 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:08.278 14:48:41 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:08.278 14:48:41 -- common/autotest_common.sh@1369 -- # local bs 00:08:08.278 14:48:41 -- common/autotest_common.sh@1370 -- # local nb 00:08:08.278 14:48:41 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:08.278 14:48:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.278 14:48:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.278 
14:48:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.278 14:48:41 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:08.278 { 00:08:08.278 "aliases": [ 00:08:08.278 "ebe39a30-d654-40c1-bff2-9d758c877a49" 00:08:08.278 ], 00:08:08.278 "assigned_rate_limits": { 00:08:08.278 "r_mbytes_per_sec": 0, 00:08:08.278 "rw_ios_per_sec": 0, 00:08:08.278 "rw_mbytes_per_sec": 0, 00:08:08.278 "w_mbytes_per_sec": 0 00:08:08.278 }, 00:08:08.278 "block_size": 512, 00:08:08.278 "claim_type": "exclusive_write", 00:08:08.278 "claimed": true, 00:08:08.278 "driver_specific": {}, 00:08:08.278 "memory_domains": [ 00:08:08.278 { 00:08:08.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.278 "dma_device_type": 2 00:08:08.278 } 00:08:08.278 ], 00:08:08.278 "name": "Malloc1", 00:08:08.278 "num_blocks": 1048576, 00:08:08.278 "product_name": "Malloc disk", 00:08:08.278 "supported_io_types": { 00:08:08.278 "abort": true, 00:08:08.278 "compare": false, 00:08:08.278 "compare_and_write": false, 00:08:08.278 "flush": true, 00:08:08.278 "nvme_admin": false, 00:08:08.278 "nvme_io": false, 00:08:08.278 "read": true, 00:08:08.278 "reset": true, 00:08:08.278 "unmap": true, 00:08:08.278 "write": true, 00:08:08.278 "write_zeroes": true 00:08:08.278 }, 00:08:08.278 "uuid": "ebe39a30-d654-40c1-bff2-9d758c877a49", 00:08:08.278 "zoned": false 00:08:08.278 } 00:08:08.278 ]' 00:08:08.278 14:48:41 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:08.278 14:48:41 -- common/autotest_common.sh@1372 -- # bs=512 00:08:08.278 14:48:41 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:08.278 14:48:41 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:08.278 14:48:41 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:08.278 14:48:41 -- common/autotest_common.sh@1377 -- # echo 512 00:08:08.278 14:48:41 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:08.278 14:48:41 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:08.538 14:48:41 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:08.538 14:48:41 -- common/autotest_common.sh@1187 -- # local i=0 00:08:08.538 14:48:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:08.538 14:48:41 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:08.538 14:48:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:10.441 14:48:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:10.441 14:48:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:10.441 14:48:43 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:10.699 14:48:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:10.699 14:48:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:10.700 14:48:43 -- common/autotest_common.sh@1197 -- # return 0 00:08:10.700 14:48:43 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:10.700 14:48:43 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:10.700 14:48:43 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:10.700 14:48:43 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:10.700 14:48:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:10.700 14:48:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:10.700 14:48:43 -- 
setup/common.sh@80 -- # echo 536870912 00:08:10.700 14:48:43 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:10.700 14:48:43 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:10.700 14:48:43 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:10.700 14:48:43 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:10.700 14:48:43 -- target/filesystem.sh@69 -- # partprobe 00:08:10.700 14:48:43 -- target/filesystem.sh@70 -- # sleep 1 00:08:11.636 14:48:44 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:11.636 14:48:44 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:11.636 14:48:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:11.636 14:48:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.636 14:48:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.636 ************************************ 00:08:11.636 START TEST filesystem_ext4 00:08:11.636 ************************************ 00:08:11.636 14:48:44 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:11.636 14:48:44 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:11.636 14:48:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:11.636 14:48:44 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:11.636 14:48:44 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:11.636 14:48:44 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:11.636 14:48:44 -- common/autotest_common.sh@914 -- # local i=0 00:08:11.636 14:48:44 -- common/autotest_common.sh@915 -- # local force 00:08:11.636 14:48:44 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:11.636 14:48:44 -- common/autotest_common.sh@918 -- # force=-F 00:08:11.636 14:48:44 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:11.636 mke2fs 1.47.0 (5-Feb-2023) 00:08:11.895 Discarding device blocks: 0/522240 done 00:08:11.895 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:11.895 Filesystem UUID: 94bd9438-f4be-4954-a51f-307e9a370aa0 00:08:11.895 Superblock backups stored on blocks: 00:08:11.895 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:11.895 00:08:11.895 Allocating group tables: 0/64 done 00:08:11.895 Writing inode tables: 0/64 done 00:08:11.895 Creating journal (8192 blocks): done 00:08:11.895 Writing superblocks and filesystem accounting information: 0/64 done 00:08:11.895 00:08:11.895 14:48:44 -- common/autotest_common.sh@931 -- # return 0 00:08:11.895 14:48:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.453 14:48:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.453 14:48:50 -- target/filesystem.sh@25 -- # sync 00:08:18.453 14:48:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.453 14:48:50 -- target/filesystem.sh@27 -- # sync 00:08:18.453 14:48:50 -- target/filesystem.sh@29 -- # i=0 00:08:18.453 14:48:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.453 14:48:50 -- target/filesystem.sh@37 -- # kill -0 72550 00:08:18.453 14:48:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.453 14:48:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.453 14:48:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.453 14:48:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.453 00:08:18.453 real 0m5.763s 00:08:18.453 user 0m0.024s 00:08:18.453 sys 0m0.065s 00:08:18.453 
14:48:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.453 14:48:50 -- common/autotest_common.sh@10 -- # set +x 00:08:18.453 ************************************ 00:08:18.453 END TEST filesystem_ext4 00:08:18.453 ************************************ 00:08:18.453 14:48:50 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:18.453 14:48:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:18.453 14:48:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.453 14:48:50 -- common/autotest_common.sh@10 -- # set +x 00:08:18.453 ************************************ 00:08:18.453 START TEST filesystem_btrfs 00:08:18.453 ************************************ 00:08:18.453 14:48:50 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:18.453 14:48:50 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:18.453 14:48:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.453 14:48:50 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:18.453 14:48:50 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:18.453 14:48:50 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:18.453 14:48:50 -- common/autotest_common.sh@914 -- # local i=0 00:08:18.453 14:48:50 -- common/autotest_common.sh@915 -- # local force 00:08:18.453 14:48:50 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:18.453 14:48:50 -- common/autotest_common.sh@920 -- # force=-f 00:08:18.453 14:48:50 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:18.453 btrfs-progs v6.8.1 00:08:18.453 See https://btrfs.readthedocs.io for more information. 00:08:18.453 00:08:18.453 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
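The real/user/sys figures just above time one full pass of this check. A reduced sketch of the per-filesystem cycle that target/filesystem.sh appears to run for ext4, btrfs and xfs in turn (device path and mount point copied from the trace; the case statement stands in for the make_filesystem branches visible above, and the kill -0 liveness check on the target pid is omitted):

  # Format the exported namespace's partition, prove basic I/O works on it,
  # then unmount and confirm the NVMe device and partition are still visible.
  fstype=ext4                   # the log repeats the same check for btrfs and xfs
  dev=/dev/nvme0n1p1            # partition created by parted earlier in the trace
  case "$fstype" in
    ext4) force=-F ;;           # mkfs.ext4 forces with -F
    *)    force=-f ;;           # mkfs.btrfs and mkfs.xfs force with -f
  esac
  mkfs."$fstype" "$force" "$dev"
  mkdir -p /mnt/device
  mount "$dev" /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  lsblk -l -o NAME | grep -q -w nvme0n1     # controller still connected
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present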
00:08:18.453 NOTE: several default settings have changed in version 5.15, please make sure 00:08:18.453 this does not affect your deployments: 00:08:18.453 - DUP for metadata (-m dup) 00:08:18.453 - enabled no-holes (-O no-holes) 00:08:18.453 - enabled free-space-tree (-R free-space-tree) 00:08:18.453 00:08:18.454 Label: (null) 00:08:18.454 UUID: 10426cd3-83d0-44c0-97f8-a439f5ea372a 00:08:18.454 Node size: 16384 00:08:18.454 Sector size: 4096 (CPU page size: 4096) 00:08:18.454 Filesystem size: 510.00MiB 00:08:18.454 Block group profiles: 00:08:18.454 Data: single 8.00MiB 00:08:18.454 Metadata: DUP 32.00MiB 00:08:18.454 System: DUP 8.00MiB 00:08:18.454 SSD detected: yes 00:08:18.454 Zoned device: no 00:08:18.454 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:18.454 Checksum: crc32c 00:08:18.454 Number of devices: 1 00:08:18.454 Devices: 00:08:18.454 ID SIZE PATH 00:08:18.454 1 510.00MiB /dev/nvme0n1p1 00:08:18.454 00:08:18.454 14:48:50 -- common/autotest_common.sh@931 -- # return 0 00:08:18.454 14:48:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.454 14:48:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.454 14:48:50 -- target/filesystem.sh@25 -- # sync 00:08:18.454 14:48:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.454 14:48:50 -- target/filesystem.sh@27 -- # sync 00:08:18.454 14:48:50 -- target/filesystem.sh@29 -- # i=0 00:08:18.454 14:48:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.454 14:48:50 -- target/filesystem.sh@37 -- # kill -0 72550 00:08:18.454 14:48:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.454 14:48:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.454 14:48:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.454 14:48:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.454 00:08:18.454 real 0m0.278s 00:08:18.454 user 0m0.016s 00:08:18.454 sys 0m0.068s 00:08:18.454 14:48:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.454 14:48:50 -- common/autotest_common.sh@10 -- # set +x 00:08:18.454 ************************************ 00:08:18.454 END TEST filesystem_btrfs 00:08:18.454 ************************************ 00:08:18.454 14:48:50 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:18.454 14:48:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:18.454 14:48:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.454 14:48:50 -- common/autotest_common.sh@10 -- # set +x 00:08:18.454 ************************************ 00:08:18.454 START TEST filesystem_xfs 00:08:18.454 ************************************ 00:08:18.454 14:48:50 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:18.454 14:48:50 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:18.454 14:48:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.454 14:48:50 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:18.454 14:48:50 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:18.454 14:48:50 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:18.454 14:48:50 -- common/autotest_common.sh@914 -- # local i=0 00:08:18.454 14:48:50 -- common/autotest_common.sh@915 -- # local force 00:08:18.454 14:48:50 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:18.454 14:48:50 -- common/autotest_common.sh@920 -- # force=-f 00:08:18.454 14:48:50 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:18.454 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:18.454 = sectsz=512 attr=2, projid32bit=1 00:08:18.454 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:18.454 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:18.454 data = bsize=4096 blocks=130560, imaxpct=25 00:08:18.454 = sunit=0 swidth=0 blks 00:08:18.454 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:18.454 log =internal log bsize=4096 blocks=16384, version=2 00:08:18.454 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:18.454 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:18.713 Discarding blocks...Done. 00:08:18.713 14:48:51 -- common/autotest_common.sh@931 -- # return 0 00:08:18.713 14:48:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.246 14:48:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.246 14:48:53 -- target/filesystem.sh@25 -- # sync 00:08:21.246 14:48:53 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.246 14:48:53 -- target/filesystem.sh@27 -- # sync 00:08:21.246 14:48:53 -- target/filesystem.sh@29 -- # i=0 00:08:21.246 14:48:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.246 14:48:53 -- target/filesystem.sh@37 -- # kill -0 72550 00:08:21.246 14:48:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.246 14:48:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.246 14:48:53 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.246 14:48:53 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.246 00:08:21.246 real 0m3.125s 00:08:21.246 user 0m0.022s 00:08:21.246 sys 0m0.060s 00:08:21.246 14:48:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.246 ************************************ 00:08:21.246 END TEST filesystem_xfs 00:08:21.246 ************************************ 00:08:21.246 14:48:53 -- common/autotest_common.sh@10 -- # set +x 00:08:21.246 14:48:54 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:21.246 14:48:54 -- target/filesystem.sh@93 -- # sync 00:08:21.246 14:48:54 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.246 14:48:54 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.246 14:48:54 -- common/autotest_common.sh@1208 -- # local i=0 00:08:21.246 14:48:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:21.246 14:48:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.246 14:48:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:21.246 14:48:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.246 14:48:54 -- common/autotest_common.sh@1220 -- # return 0 00:08:21.246 14:48:54 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.246 14:48:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.246 14:48:54 -- common/autotest_common.sh@10 -- # set +x 00:08:21.246 14:48:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.246 14:48:54 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:21.246 14:48:54 -- target/filesystem.sh@101 -- # killprocess 72550 00:08:21.246 14:48:54 -- common/autotest_common.sh@936 -- # '[' -z 72550 ']' 00:08:21.246 14:48:54 -- common/autotest_common.sh@940 -- # kill -0 72550 00:08:21.246 14:48:54 -- common/autotest_common.sh@941 -- # uname 00:08:21.246 14:48:54 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.246 14:48:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72550 00:08:21.246 killing process with pid 72550 00:08:21.246 14:48:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:21.246 14:48:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:21.246 14:48:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72550' 00:08:21.246 14:48:54 -- common/autotest_common.sh@955 -- # kill 72550 00:08:21.246 14:48:54 -- common/autotest_common.sh@960 -- # wait 72550 00:08:21.506 ************************************ 00:08:21.506 END TEST nvmf_filesystem_no_in_capsule 00:08:21.506 ************************************ 00:08:21.506 14:48:54 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:21.506 00:08:21.506 real 0m14.605s 00:08:21.506 user 0m56.514s 00:08:21.506 sys 0m1.618s 00:08:21.506 14:48:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.506 14:48:54 -- common/autotest_common.sh@10 -- # set +x 00:08:21.506 14:48:54 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:21.506 14:48:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:21.506 14:48:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.506 14:48:54 -- common/autotest_common.sh@10 -- # set +x 00:08:21.506 ************************************ 00:08:21.506 START TEST nvmf_filesystem_in_capsule 00:08:21.506 ************************************ 00:08:21.506 14:48:54 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:21.506 14:48:54 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:21.506 14:48:54 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:21.506 14:48:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:21.506 14:48:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.506 14:48:54 -- common/autotest_common.sh@10 -- # set +x 00:08:21.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.765 14:48:54 -- nvmf/common.sh@469 -- # nvmfpid=72922 00:08:21.765 14:48:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.765 14:48:54 -- nvmf/common.sh@470 -- # waitforlisten 72922 00:08:21.765 14:48:54 -- common/autotest_common.sh@829 -- # '[' -z 72922 ']' 00:08:21.765 14:48:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.765 14:48:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.765 14:48:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.765 14:48:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.765 14:48:54 -- common/autotest_common.sh@10 -- # set +x 00:08:21.765 [2024-12-01 14:48:54.670989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
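The suite that starts here (nvmf_filesystem_in_capsule) replays the same provisioning sequence as the run above; the only functional difference in the trace is the in-capsule data size handed to the TCP transport. Reduced to plain RPC calls (the trace issues these through its rpc_cmd wrapper inside the target namespace; scripts/rpc.py against the default /var/tmp/spdk.sock socket is assumed here):

  # Transport creation is the one step that differs between the two suites:
  #   first pass:  nvmf_create_transport -t tcp -o -u 8192 -c 0      (no in-capsule data)
  #   this pass:   nvmf_create_transport -t tcp -o -u 8192 -c 4096   (4 KiB in-capsule)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  # Shared provisioning: a 512 MiB malloc bdev exported as a namespace of cnode1,
  # listening on the namespace-side address set up earlier.
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420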
00:08:21.765 [2024-12-01 14:48:54.671067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.765 [2024-12-01 14:48:54.806812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.765 [2024-12-01 14:48:54.852584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.765 [2024-12-01 14:48:54.853008] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.765 [2024-12-01 14:48:54.853163] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.765 [2024-12-01 14:48:54.853290] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.765 [2024-12-01 14:48:54.853403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.765 [2024-12-01 14:48:54.853579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.765 [2024-12-01 14:48:54.854237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.765 [2024-12-01 14:48:54.854267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.700 14:48:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.700 14:48:55 -- common/autotest_common.sh@862 -- # return 0 00:08:22.700 14:48:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:22.700 14:48:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.700 14:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:22.700 14:48:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.700 14:48:55 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:22.700 14:48:55 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:22.700 14:48:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.700 14:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:22.700 [2024-12-01 14:48:55.632979] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.700 14:48:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.700 14:48:55 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:22.700 14:48:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.700 14:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:22.700 Malloc1 00:08:22.700 14:48:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.700 14:48:55 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:22.700 14:48:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.700 14:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:22.700 14:48:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.700 14:48:55 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.700 14:48:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.700 14:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:22.959 14:48:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.959 14:48:55 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.959 14:48:55 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.959 14:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:22.959 [2024-12-01 14:48:55.818970] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.959 14:48:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.959 14:48:55 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:22.959 14:48:55 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:22.959 14:48:55 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:22.959 14:48:55 -- common/autotest_common.sh@1369 -- # local bs 00:08:22.959 14:48:55 -- common/autotest_common.sh@1370 -- # local nb 00:08:22.959 14:48:55 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:22.959 14:48:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.959 14:48:55 -- common/autotest_common.sh@10 -- # set +x 00:08:22.959 14:48:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.959 14:48:55 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:22.959 { 00:08:22.959 "aliases": [ 00:08:22.959 "4ae65bdd-f055-4745-a0d8-07785c65f043" 00:08:22.959 ], 00:08:22.959 "assigned_rate_limits": { 00:08:22.959 "r_mbytes_per_sec": 0, 00:08:22.959 "rw_ios_per_sec": 0, 00:08:22.959 "rw_mbytes_per_sec": 0, 00:08:22.959 "w_mbytes_per_sec": 0 00:08:22.959 }, 00:08:22.959 "block_size": 512, 00:08:22.959 "claim_type": "exclusive_write", 00:08:22.959 "claimed": true, 00:08:22.959 "driver_specific": {}, 00:08:22.959 "memory_domains": [ 00:08:22.959 { 00:08:22.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.959 "dma_device_type": 2 00:08:22.959 } 00:08:22.959 ], 00:08:22.959 "name": "Malloc1", 00:08:22.959 "num_blocks": 1048576, 00:08:22.959 "product_name": "Malloc disk", 00:08:22.959 "supported_io_types": { 00:08:22.959 "abort": true, 00:08:22.959 "compare": false, 00:08:22.959 "compare_and_write": false, 00:08:22.959 "flush": true, 00:08:22.959 "nvme_admin": false, 00:08:22.959 "nvme_io": false, 00:08:22.959 "read": true, 00:08:22.959 "reset": true, 00:08:22.959 "unmap": true, 00:08:22.959 "write": true, 00:08:22.959 "write_zeroes": true 00:08:22.959 }, 00:08:22.959 "uuid": "4ae65bdd-f055-4745-a0d8-07785c65f043", 00:08:22.959 "zoned": false 00:08:22.959 } 00:08:22.959 ]' 00:08:22.959 14:48:55 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:22.959 14:48:55 -- common/autotest_common.sh@1372 -- # bs=512 00:08:22.959 14:48:55 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:22.959 14:48:55 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:22.959 14:48:55 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:22.959 14:48:55 -- common/autotest_common.sh@1377 -- # echo 512 00:08:22.959 14:48:55 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:22.959 14:48:55 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:23.218 14:48:56 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:23.218 14:48:56 -- common/autotest_common.sh@1187 -- # local i=0 00:08:23.218 14:48:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:23.218 14:48:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:23.218 14:48:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:25.122 14:48:58 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:25.122 14:48:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:25.122 14:48:58 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:25.122 14:48:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:25.122 14:48:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:25.122 14:48:58 -- common/autotest_common.sh@1197 -- # return 0 00:08:25.122 14:48:58 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:25.122 14:48:58 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:25.122 14:48:58 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:25.122 14:48:58 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:25.122 14:48:58 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:25.122 14:48:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:25.122 14:48:58 -- setup/common.sh@80 -- # echo 536870912 00:08:25.122 14:48:58 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:25.122 14:48:58 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:25.122 14:48:58 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:25.122 14:48:58 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:25.122 14:48:58 -- target/filesystem.sh@69 -- # partprobe 00:08:25.381 14:48:58 -- target/filesystem.sh@70 -- # sleep 1 00:08:26.318 14:48:59 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:26.318 14:48:59 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:26.318 14:48:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:26.318 14:48:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.318 14:48:59 -- common/autotest_common.sh@10 -- # set +x 00:08:26.318 ************************************ 00:08:26.318 START TEST filesystem_in_capsule_ext4 00:08:26.318 ************************************ 00:08:26.318 14:48:59 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:26.318 14:48:59 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:26.318 14:48:59 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:26.318 14:48:59 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:26.318 14:48:59 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:26.318 14:48:59 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:26.318 14:48:59 -- common/autotest_common.sh@914 -- # local i=0 00:08:26.318 14:48:59 -- common/autotest_common.sh@915 -- # local force 00:08:26.318 14:48:59 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:26.318 14:48:59 -- common/autotest_common.sh@918 -- # force=-F 00:08:26.318 14:48:59 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:26.318 mke2fs 1.47.0 (5-Feb-2023) 00:08:26.318 Discarding device blocks: 0/522240 done 00:08:26.318 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:26.318 Filesystem UUID: 00563e11-50a3-44b6-815c-34de39e95b54 00:08:26.318 Superblock backups stored on blocks: 00:08:26.318 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:26.318 00:08:26.318 Allocating group tables: 0/64 done 00:08:26.318 Writing inode tables: 0/64 done 00:08:26.576 Creating journal (8192 blocks): done 00:08:26.576 Writing superblocks and filesystem accounting information: 0/64 done 00:08:26.576 00:08:26.576 14:48:59 
-- common/autotest_common.sh@931 -- # return 0 00:08:26.576 14:48:59 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.847 14:49:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.847 14:49:04 -- target/filesystem.sh@25 -- # sync 00:08:31.847 14:49:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.847 14:49:04 -- target/filesystem.sh@27 -- # sync 00:08:31.847 14:49:04 -- target/filesystem.sh@29 -- # i=0 00:08:31.847 14:49:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.847 14:49:04 -- target/filesystem.sh@37 -- # kill -0 72922 00:08:31.847 14:49:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.847 14:49:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.847 14:49:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.847 14:49:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.847 ************************************ 00:08:31.847 END TEST filesystem_in_capsule_ext4 00:08:31.847 ************************************ 00:08:31.847 00:08:31.847 real 0m5.595s 00:08:31.847 user 0m0.025s 00:08:31.847 sys 0m0.065s 00:08:31.847 14:49:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.847 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.847 14:49:04 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:31.848 14:49:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:31.848 14:49:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.848 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.848 ************************************ 00:08:31.848 START TEST filesystem_in_capsule_btrfs 00:08:31.848 ************************************ 00:08:31.848 14:49:04 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:31.848 14:49:04 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:31.848 14:49:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.848 14:49:04 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:31.848 14:49:04 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:31.848 14:49:04 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:31.848 14:49:04 -- common/autotest_common.sh@914 -- # local i=0 00:08:31.848 14:49:04 -- common/autotest_common.sh@915 -- # local force 00:08:31.848 14:49:04 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:31.848 14:49:04 -- common/autotest_common.sh@920 -- # force=-f 00:08:31.848 14:49:04 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:32.119 btrfs-progs v6.8.1 00:08:32.119 See https://btrfs.readthedocs.io for more information. 00:08:32.119 00:08:32.119 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
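For reference, the /dev/nvme0n1p1 being reformatted in these checks was produced on the initiator side earlier in the trace. Condensed, that host-side sequence is as follows (NQNs, host ID, serial and sizes copied from the log; the retry loop and the awk extraction are simplifications of the waitforserial and grep -oP helpers the script actually uses):

  # Connect the kernel initiator to the SPDK subsystem over TCP.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b \
      --hostid=2d843004-a791-47f3-8dd7-3d04462c368b
  # Wait for the namespace to appear, then locate it by its serial number.
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | awk '$2 == "SPDKISFASTANDAWESOME" {print $1; exit}')
  # Sanity-check the size against the 512 MiB malloc bdev (1048576 blocks of
  # 512 B = 536870912 bytes), then lay down one GPT partition over the device.
  cat "/sys/block/$nvme_name/size"
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe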
00:08:32.119 NOTE: several default settings have changed in version 5.15, please make sure 00:08:32.119 this does not affect your deployments: 00:08:32.119 - DUP for metadata (-m dup) 00:08:32.119 - enabled no-holes (-O no-holes) 00:08:32.119 - enabled free-space-tree (-R free-space-tree) 00:08:32.119 00:08:32.119 Label: (null) 00:08:32.119 UUID: d4fa1aa1-0f4b-4095-b108-e3fcf61e7934 00:08:32.119 Node size: 16384 00:08:32.119 Sector size: 4096 (CPU page size: 4096) 00:08:32.119 Filesystem size: 510.00MiB 00:08:32.119 Block group profiles: 00:08:32.119 Data: single 8.00MiB 00:08:32.119 Metadata: DUP 32.00MiB 00:08:32.119 System: DUP 8.00MiB 00:08:32.119 SSD detected: yes 00:08:32.119 Zoned device: no 00:08:32.119 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:32.119 Checksum: crc32c 00:08:32.120 Number of devices: 1 00:08:32.120 Devices: 00:08:32.120 ID SIZE PATH 00:08:32.120 1 510.00MiB /dev/nvme0n1p1 00:08:32.120 00:08:32.120 14:49:05 -- common/autotest_common.sh@931 -- # return 0 00:08:32.120 14:49:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.120 14:49:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.120 14:49:05 -- target/filesystem.sh@25 -- # sync 00:08:32.120 14:49:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.120 14:49:05 -- target/filesystem.sh@27 -- # sync 00:08:32.120 14:49:05 -- target/filesystem.sh@29 -- # i=0 00:08:32.120 14:49:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.120 14:49:05 -- target/filesystem.sh@37 -- # kill -0 72922 00:08:32.120 14:49:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.120 14:49:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.385 14:49:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.385 14:49:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.385 ************************************ 00:08:32.385 END TEST filesystem_in_capsule_btrfs 00:08:32.385 ************************************ 00:08:32.385 00:08:32.385 real 0m0.320s 00:08:32.385 user 0m0.020s 00:08:32.385 sys 0m0.064s 00:08:32.385 14:49:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.385 14:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:32.385 14:49:05 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:32.385 14:49:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:32.385 14:49:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.385 14:49:05 -- common/autotest_common.sh@10 -- # set +x 00:08:32.385 ************************************ 00:08:32.385 START TEST filesystem_in_capsule_xfs 00:08:32.385 ************************************ 00:08:32.385 14:49:05 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:32.385 14:49:05 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:32.385 14:49:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.385 14:49:05 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:32.385 14:49:05 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:32.385 14:49:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:32.385 14:49:05 -- common/autotest_common.sh@914 -- # local i=0 00:08:32.385 14:49:05 -- common/autotest_common.sh@915 -- # local force 00:08:32.385 14:49:05 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:32.385 14:49:05 -- common/autotest_common.sh@920 -- # force=-f 00:08:32.385 14:49:05 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:32.385 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:32.385 = sectsz=512 attr=2, projid32bit=1 00:08:32.385 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:32.385 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:32.385 data = bsize=4096 blocks=130560, imaxpct=25 00:08:32.385 = sunit=0 swidth=0 blks 00:08:32.385 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:32.385 log =internal log bsize=4096 blocks=16384, version=2 00:08:32.385 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:32.385 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:32.951 Discarding blocks...Done. 00:08:32.951 14:49:06 -- common/autotest_common.sh@931 -- # return 0 00:08:32.951 14:49:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:34.856 14:49:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:34.856 14:49:07 -- target/filesystem.sh@25 -- # sync 00:08:34.856 14:49:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:34.856 14:49:07 -- target/filesystem.sh@27 -- # sync 00:08:34.856 14:49:07 -- target/filesystem.sh@29 -- # i=0 00:08:34.856 14:49:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:34.856 14:49:07 -- target/filesystem.sh@37 -- # kill -0 72922 00:08:34.856 14:49:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:34.856 14:49:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:34.856 14:49:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:34.856 14:49:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:34.856 ************************************ 00:08:34.856 END TEST filesystem_in_capsule_xfs 00:08:34.856 ************************************ 00:08:34.856 00:08:34.856 real 0m2.622s 00:08:34.856 user 0m0.033s 00:08:34.856 sys 0m0.051s 00:08:34.856 14:49:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.856 14:49:07 -- common/autotest_common.sh@10 -- # set +x 00:08:34.856 14:49:07 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:34.856 14:49:07 -- target/filesystem.sh@93 -- # sync 00:08:35.115 14:49:07 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:35.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.115 14:49:08 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:35.115 14:49:08 -- common/autotest_common.sh@1208 -- # local i=0 00:08:35.115 14:49:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:35.115 14:49:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.115 14:49:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:35.115 14:49:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.115 14:49:08 -- common/autotest_common.sh@1220 -- # return 0 00:08:35.115 14:49:08 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.115 14:49:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.115 14:49:08 -- common/autotest_common.sh@10 -- # set +x 00:08:35.115 14:49:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.115 14:49:08 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:35.115 14:49:08 -- target/filesystem.sh@101 -- # killprocess 72922 00:08:35.115 14:49:08 -- common/autotest_common.sh@936 -- # '[' -z 72922 ']' 00:08:35.115 14:49:08 -- common/autotest_common.sh@940 -- # kill -0 72922 00:08:35.115 14:49:08 -- 
common/autotest_common.sh@941 -- # uname 00:08:35.115 14:49:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:35.115 14:49:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72922 00:08:35.115 killing process with pid 72922 00:08:35.115 14:49:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:35.115 14:49:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:35.115 14:49:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72922' 00:08:35.115 14:49:08 -- common/autotest_common.sh@955 -- # kill 72922 00:08:35.115 14:49:08 -- common/autotest_common.sh@960 -- # wait 72922 00:08:35.683 ************************************ 00:08:35.683 END TEST nvmf_filesystem_in_capsule 00:08:35.683 ************************************ 00:08:35.683 14:49:08 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:35.683 00:08:35.683 real 0m13.957s 00:08:35.683 user 0m54.031s 00:08:35.683 sys 0m1.517s 00:08:35.683 14:49:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.683 14:49:08 -- common/autotest_common.sh@10 -- # set +x 00:08:35.683 14:49:08 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:35.683 14:49:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:35.683 14:49:08 -- nvmf/common.sh@116 -- # sync 00:08:35.683 14:49:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:35.683 14:49:08 -- nvmf/common.sh@119 -- # set +e 00:08:35.683 14:49:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:35.683 14:49:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:35.683 rmmod nvme_tcp 00:08:35.683 rmmod nvme_fabrics 00:08:35.683 rmmod nvme_keyring 00:08:35.683 14:49:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:35.683 14:49:08 -- nvmf/common.sh@123 -- # set -e 00:08:35.683 14:49:08 -- nvmf/common.sh@124 -- # return 0 00:08:35.683 14:49:08 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:35.683 14:49:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:35.683 14:49:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:35.683 14:49:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:35.683 14:49:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.683 14:49:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:35.683 14:49:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.683 14:49:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.683 14:49:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.683 14:49:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:35.683 00:08:35.683 real 0m29.590s 00:08:35.683 user 1m50.914s 00:08:35.683 sys 0m3.576s 00:08:35.683 14:49:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.683 14:49:08 -- common/autotest_common.sh@10 -- # set +x 00:08:35.683 ************************************ 00:08:35.683 END TEST nvmf_filesystem 00:08:35.683 ************************************ 00:08:35.943 14:49:08 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:35.943 14:49:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:35.943 14:49:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.943 14:49:08 -- common/autotest_common.sh@10 -- # set +x 00:08:35.943 ************************************ 00:08:35.943 START TEST nvmf_discovery 00:08:35.943 ************************************ 00:08:35.943 14:49:08 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:35.943 * Looking for test storage... 00:08:35.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.943 14:49:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:35.943 14:49:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:35.943 14:49:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:35.943 14:49:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:35.943 14:49:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:35.943 14:49:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:35.943 14:49:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:35.943 14:49:08 -- scripts/common.sh@335 -- # IFS=.-: 00:08:35.943 14:49:08 -- scripts/common.sh@335 -- # read -ra ver1 00:08:35.943 14:49:08 -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.943 14:49:08 -- scripts/common.sh@336 -- # read -ra ver2 00:08:35.943 14:49:08 -- scripts/common.sh@337 -- # local 'op=<' 00:08:35.943 14:49:08 -- scripts/common.sh@339 -- # ver1_l=2 00:08:35.943 14:49:08 -- scripts/common.sh@340 -- # ver2_l=1 00:08:35.943 14:49:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:35.943 14:49:08 -- scripts/common.sh@343 -- # case "$op" in 00:08:35.943 14:49:08 -- scripts/common.sh@344 -- # : 1 00:08:35.943 14:49:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:35.943 14:49:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.943 14:49:08 -- scripts/common.sh@364 -- # decimal 1 00:08:35.943 14:49:08 -- scripts/common.sh@352 -- # local d=1 00:08:35.943 14:49:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.943 14:49:08 -- scripts/common.sh@354 -- # echo 1 00:08:35.943 14:49:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:35.943 14:49:08 -- scripts/common.sh@365 -- # decimal 2 00:08:35.943 14:49:08 -- scripts/common.sh@352 -- # local d=2 00:08:35.943 14:49:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.943 14:49:08 -- scripts/common.sh@354 -- # echo 2 00:08:35.943 14:49:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:35.943 14:49:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:35.943 14:49:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:35.943 14:49:08 -- scripts/common.sh@367 -- # return 0 00:08:35.943 14:49:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.943 14:49:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.943 --rc genhtml_branch_coverage=1 00:08:35.943 --rc genhtml_function_coverage=1 00:08:35.943 --rc genhtml_legend=1 00:08:35.943 --rc geninfo_all_blocks=1 00:08:35.943 --rc geninfo_unexecuted_blocks=1 00:08:35.943 00:08:35.943 ' 00:08:35.943 14:49:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.943 --rc genhtml_branch_coverage=1 00:08:35.943 --rc genhtml_function_coverage=1 00:08:35.943 --rc genhtml_legend=1 00:08:35.943 --rc geninfo_all_blocks=1 00:08:35.943 --rc geninfo_unexecuted_blocks=1 00:08:35.943 00:08:35.943 ' 00:08:35.943 14:49:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.943 --rc genhtml_branch_coverage=1 00:08:35.943 --rc genhtml_function_coverage=1 00:08:35.943 --rc genhtml_legend=1 00:08:35.943 
--rc geninfo_all_blocks=1 00:08:35.943 --rc geninfo_unexecuted_blocks=1 00:08:35.943 00:08:35.943 ' 00:08:35.943 14:49:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.943 --rc genhtml_branch_coverage=1 00:08:35.943 --rc genhtml_function_coverage=1 00:08:35.943 --rc genhtml_legend=1 00:08:35.944 --rc geninfo_all_blocks=1 00:08:35.944 --rc geninfo_unexecuted_blocks=1 00:08:35.944 00:08:35.944 ' 00:08:35.944 14:49:08 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.944 14:49:08 -- nvmf/common.sh@7 -- # uname -s 00:08:35.944 14:49:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.944 14:49:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.944 14:49:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.944 14:49:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.944 14:49:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.944 14:49:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.944 14:49:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.944 14:49:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.944 14:49:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.944 14:49:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.944 14:49:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:08:35.944 14:49:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:08:35.944 14:49:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.944 14:49:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.944 14:49:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.944 14:49:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.944 14:49:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.944 14:49:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.944 14:49:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.944 14:49:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.944 14:49:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.944 14:49:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.944 14:49:09 -- paths/export.sh@5 -- # export PATH 00:08:35.944 14:49:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.944 14:49:09 -- nvmf/common.sh@46 -- # : 0 00:08:35.944 14:49:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:35.944 14:49:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:35.944 14:49:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:35.944 14:49:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.944 14:49:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.944 14:49:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:35.944 14:49:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:35.944 14:49:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:35.944 14:49:09 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:35.944 14:49:09 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:35.944 14:49:09 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:35.944 14:49:09 -- target/discovery.sh@15 -- # hash nvme 00:08:35.944 14:49:09 -- target/discovery.sh@20 -- # nvmftestinit 00:08:35.944 14:49:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:35.944 14:49:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.944 14:49:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:35.944 14:49:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:35.944 14:49:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:35.944 14:49:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.944 14:49:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.944 14:49:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.944 14:49:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:35.944 14:49:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:35.944 14:49:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:35.944 14:49:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:35.944 14:49:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:35.944 14:49:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:35.944 14:49:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.944 14:49:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.944 14:49:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:35.944 14:49:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:35.944 14:49:09 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.944 14:49:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.944 14:49:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.944 14:49:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.944 14:49:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.944 14:49:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.944 14:49:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.944 14:49:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.944 14:49:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:35.944 14:49:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:35.944 Cannot find device "nvmf_tgt_br" 00:08:35.944 14:49:09 -- nvmf/common.sh@154 -- # true 00:08:35.944 14:49:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:36.203 Cannot find device "nvmf_tgt_br2" 00:08:36.203 14:49:09 -- nvmf/common.sh@155 -- # true 00:08:36.203 14:49:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:36.203 14:49:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:36.203 Cannot find device "nvmf_tgt_br" 00:08:36.203 14:49:09 -- nvmf/common.sh@157 -- # true 00:08:36.203 14:49:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:36.203 Cannot find device "nvmf_tgt_br2" 00:08:36.203 14:49:09 -- nvmf/common.sh@158 -- # true 00:08:36.203 14:49:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:36.203 14:49:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:36.203 14:49:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:36.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.203 14:49:09 -- nvmf/common.sh@161 -- # true 00:08:36.203 14:49:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:36.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.203 14:49:09 -- nvmf/common.sh@162 -- # true 00:08:36.203 14:49:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:36.203 14:49:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:36.203 14:49:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:36.203 14:49:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:36.203 14:49:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:36.203 14:49:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:36.203 14:49:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:36.203 14:49:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:36.203 14:49:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:36.203 14:49:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:36.203 14:49:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:36.203 14:49:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:36.203 14:49:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:36.203 14:49:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:36.203 14:49:09 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:36.203 14:49:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:36.203 14:49:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:36.203 14:49:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:36.203 14:49:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:36.203 14:49:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:36.203 14:49:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:36.203 14:49:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:36.203 14:49:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:36.203 14:49:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:36.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:08:36.203 00:08:36.203 --- 10.0.0.2 ping statistics --- 00:08:36.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.203 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:36.203 14:49:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:36.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:36.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:36.203 00:08:36.203 --- 10.0.0.3 ping statistics --- 00:08:36.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.203 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:36.203 14:49:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:36.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:36.203 00:08:36.203 --- 10.0.0.1 ping statistics --- 00:08:36.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.203 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:36.203 14:49:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.203 14:49:09 -- nvmf/common.sh@421 -- # return 0 00:08:36.203 14:49:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:36.203 14:49:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.203 14:49:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:36.203 14:49:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:36.203 14:49:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.203 14:49:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:36.203 14:49:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:36.462 14:49:09 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:36.462 14:49:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:36.462 14:49:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.462 14:49:09 -- common/autotest_common.sh@10 -- # set +x 00:08:36.462 14:49:09 -- nvmf/common.sh@469 -- # nvmfpid=73474 00:08:36.462 14:49:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:36.462 14:49:09 -- nvmf/common.sh@470 -- # waitforlisten 73474 00:08:36.462 14:49:09 -- common/autotest_common.sh@829 -- # '[' -z 73474 ']' 00:08:36.462 14:49:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.462 14:49:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.462 14:49:09 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.462 14:49:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.462 14:49:09 -- common/autotest_common.sh@10 -- # set +x 00:08:36.462 [2024-12-01 14:49:09.378746] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:36.462 [2024-12-01 14:49:09.378821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.462 [2024-12-01 14:49:09.508545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.462 [2024-12-01 14:49:09.562412] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.462 [2024-12-01 14:49:09.562840] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.462 [2024-12-01 14:49:09.562981] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.462 [2024-12-01 14:49:09.563095] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.462 [2024-12-01 14:49:09.563418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.462 [2024-12-01 14:49:09.563484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.462 [2024-12-01 14:49:09.563606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.462 [2024-12-01 14:49:09.563546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.398 14:49:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.398 14:49:10 -- common/autotest_common.sh@862 -- # return 0 00:08:37.398 14:49:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:37.398 14:49:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.398 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.398 14:49:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.398 14:49:10 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.398 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.398 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.398 [2024-12-01 14:49:10.444407] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.398 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.398 14:49:10 -- target/discovery.sh@26 -- # seq 1 4 00:08:37.398 14:49:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.398 14:49:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:37.398 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.398 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.398 Null1 00:08:37.398 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.398 14:49:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.398 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.398 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.398 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:37.398 14:49:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:37.398 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.398 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.398 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.398 14:49:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.398 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.398 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.398 [2024-12-01 14:49:10.507141] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.658 14:49:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 Null2 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.658 14:49:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 Null3 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.658 14:49:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 Null4 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:37.658 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.658 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.658 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.658 14:49:10 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -a 10.0.0.2 -s 4420 00:08:37.658 00:08:37.658 Discovery Log Number of Records 6, Generation counter 6 00:08:37.658 =====Discovery Log Entry 0====== 00:08:37.658 trtype: tcp 00:08:37.658 adrfam: ipv4 00:08:37.658 subtype: current discovery subsystem 00:08:37.658 treq: not required 00:08:37.658 portid: 0 00:08:37.658 trsvcid: 4420 00:08:37.658 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:37.658 traddr: 10.0.0.2 00:08:37.658 eflags: explicit discovery connections, duplicate discovery information 00:08:37.658 sectype: none 00:08:37.658 =====Discovery Log Entry 1====== 00:08:37.658 trtype: tcp 00:08:37.658 adrfam: ipv4 00:08:37.658 subtype: nvme subsystem 00:08:37.658 treq: not required 00:08:37.658 portid: 0 00:08:37.658 trsvcid: 4420 00:08:37.658 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:37.658 traddr: 10.0.0.2 00:08:37.658 eflags: none 00:08:37.658 sectype: none 00:08:37.658 =====Discovery Log Entry 2====== 00:08:37.658 trtype: tcp 00:08:37.658 adrfam: ipv4 00:08:37.658 subtype: nvme subsystem 00:08:37.658 treq: not required 00:08:37.658 portid: 0 00:08:37.658 trsvcid: 4420 
00:08:37.658 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:37.658 traddr: 10.0.0.2 00:08:37.658 eflags: none 00:08:37.658 sectype: none 00:08:37.658 =====Discovery Log Entry 3====== 00:08:37.658 trtype: tcp 00:08:37.658 adrfam: ipv4 00:08:37.658 subtype: nvme subsystem 00:08:37.658 treq: not required 00:08:37.658 portid: 0 00:08:37.658 trsvcid: 4420 00:08:37.658 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:37.658 traddr: 10.0.0.2 00:08:37.658 eflags: none 00:08:37.658 sectype: none 00:08:37.658 =====Discovery Log Entry 4====== 00:08:37.658 trtype: tcp 00:08:37.658 adrfam: ipv4 00:08:37.658 subtype: nvme subsystem 00:08:37.658 treq: not required 00:08:37.658 portid: 0 00:08:37.658 trsvcid: 4420 00:08:37.658 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:37.658 traddr: 10.0.0.2 00:08:37.658 eflags: none 00:08:37.658 sectype: none 00:08:37.658 =====Discovery Log Entry 5====== 00:08:37.658 trtype: tcp 00:08:37.658 adrfam: ipv4 00:08:37.658 subtype: discovery subsystem referral 00:08:37.658 treq: not required 00:08:37.658 portid: 0 00:08:37.658 trsvcid: 4430 00:08:37.658 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:37.658 traddr: 10.0.0.2 00:08:37.658 eflags: none 00:08:37.658 sectype: none 00:08:37.658 Perform nvmf subsystem discovery via RPC 00:08:37.658 14:49:10 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:37.659 14:49:10 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:37.659 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.659 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.659 [2024-12-01 14:49:10.739328] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:37.659 [ 00:08:37.659 { 00:08:37.659 "allow_any_host": true, 00:08:37.659 "hosts": [], 00:08:37.659 "listen_addresses": [ 00:08:37.659 { 00:08:37.659 "adrfam": "IPv4", 00:08:37.659 "traddr": "10.0.0.2", 00:08:37.659 "transport": "TCP", 00:08:37.659 "trsvcid": "4420", 00:08:37.659 "trtype": "TCP" 00:08:37.659 } 00:08:37.659 ], 00:08:37.659 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:37.659 "subtype": "Discovery" 00:08:37.659 }, 00:08:37.659 { 00:08:37.659 "allow_any_host": true, 00:08:37.659 "hosts": [], 00:08:37.659 "listen_addresses": [ 00:08:37.659 { 00:08:37.659 "adrfam": "IPv4", 00:08:37.659 "traddr": "10.0.0.2", 00:08:37.659 "transport": "TCP", 00:08:37.659 "trsvcid": "4420", 00:08:37.659 "trtype": "TCP" 00:08:37.659 } 00:08:37.659 ], 00:08:37.659 "max_cntlid": 65519, 00:08:37.659 "max_namespaces": 32, 00:08:37.659 "min_cntlid": 1, 00:08:37.659 "model_number": "SPDK bdev Controller", 00:08:37.659 "namespaces": [ 00:08:37.659 { 00:08:37.659 "bdev_name": "Null1", 00:08:37.659 "name": "Null1", 00:08:37.659 "nguid": "8750F7DA4E284181A0075195F4702417", 00:08:37.659 "nsid": 1, 00:08:37.659 "uuid": "8750f7da-4e28-4181-a007-5195f4702417" 00:08:37.659 } 00:08:37.659 ], 00:08:37.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.659 "serial_number": "SPDK00000000000001", 00:08:37.659 "subtype": "NVMe" 00:08:37.659 }, 00:08:37.659 { 00:08:37.659 "allow_any_host": true, 00:08:37.659 "hosts": [], 00:08:37.659 "listen_addresses": [ 00:08:37.659 { 00:08:37.659 "adrfam": "IPv4", 00:08:37.659 "traddr": "10.0.0.2", 00:08:37.659 "transport": "TCP", 00:08:37.659 "trsvcid": "4420", 00:08:37.659 "trtype": "TCP" 00:08:37.659 } 00:08:37.659 ], 00:08:37.659 "max_cntlid": 65519, 00:08:37.659 "max_namespaces": 32, 00:08:37.659 "min_cntlid": 1, 
00:08:37.659 "model_number": "SPDK bdev Controller", 00:08:37.659 "namespaces": [ 00:08:37.659 { 00:08:37.659 "bdev_name": "Null2", 00:08:37.659 "name": "Null2", 00:08:37.659 "nguid": "9139624E002F4A3C943D774BCF7F5160", 00:08:37.659 "nsid": 1, 00:08:37.659 "uuid": "9139624e-002f-4a3c-943d-774bcf7f5160" 00:08:37.659 } 00:08:37.659 ], 00:08:37.659 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:37.659 "serial_number": "SPDK00000000000002", 00:08:37.659 "subtype": "NVMe" 00:08:37.659 }, 00:08:37.659 { 00:08:37.659 "allow_any_host": true, 00:08:37.659 "hosts": [], 00:08:37.659 "listen_addresses": [ 00:08:37.659 { 00:08:37.659 "adrfam": "IPv4", 00:08:37.659 "traddr": "10.0.0.2", 00:08:37.659 "transport": "TCP", 00:08:37.659 "trsvcid": "4420", 00:08:37.659 "trtype": "TCP" 00:08:37.659 } 00:08:37.659 ], 00:08:37.659 "max_cntlid": 65519, 00:08:37.659 "max_namespaces": 32, 00:08:37.659 "min_cntlid": 1, 00:08:37.659 "model_number": "SPDK bdev Controller", 00:08:37.659 "namespaces": [ 00:08:37.659 { 00:08:37.659 "bdev_name": "Null3", 00:08:37.659 "name": "Null3", 00:08:37.659 "nguid": "C985DD0337624845AE01915CB7F1FB0B", 00:08:37.659 "nsid": 1, 00:08:37.659 "uuid": "c985dd03-3762-4845-ae01-915cb7f1fb0b" 00:08:37.659 } 00:08:37.659 ], 00:08:37.659 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:37.659 "serial_number": "SPDK00000000000003", 00:08:37.659 "subtype": "NVMe" 00:08:37.659 }, 00:08:37.659 { 00:08:37.659 "allow_any_host": true, 00:08:37.659 "hosts": [], 00:08:37.659 "listen_addresses": [ 00:08:37.659 { 00:08:37.659 "adrfam": "IPv4", 00:08:37.659 "traddr": "10.0.0.2", 00:08:37.659 "transport": "TCP", 00:08:37.659 "trsvcid": "4420", 00:08:37.659 "trtype": "TCP" 00:08:37.659 } 00:08:37.659 ], 00:08:37.659 "max_cntlid": 65519, 00:08:37.659 "max_namespaces": 32, 00:08:37.659 "min_cntlid": 1, 00:08:37.659 "model_number": "SPDK bdev Controller", 00:08:37.659 "namespaces": [ 00:08:37.659 { 00:08:37.659 "bdev_name": "Null4", 00:08:37.659 "name": "Null4", 00:08:37.659 "nguid": "AC80DAC84C0944C3812AA03D96C1E494", 00:08:37.659 "nsid": 1, 00:08:37.659 "uuid": "ac80dac8-4c09-44c3-812a-a03d96c1e494" 00:08:37.659 } 00:08:37.659 ], 00:08:37.659 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:37.659 "serial_number": "SPDK00000000000004", 00:08:37.659 "subtype": "NVMe" 00:08:37.659 } 00:08:37.659 ] 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- target/discovery.sh@42 -- # seq 1 4 00:08:37.919 14:49:10 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:37.919 14:49:10 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.919 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.919 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:37.919 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.919 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:37.919 14:49:10 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:37.919 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.919 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:37.919 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.919 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:37.919 14:49:10 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:37.919 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.919 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:37.919 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.919 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:37.919 14:49:10 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:37.919 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.919 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:37.919 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.919 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:37.919 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.919 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:37.919 14:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.919 14:49:10 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:37.919 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.919 14:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.919 14:49:10 -- target/discovery.sh@49 -- # check_bdevs= 00:08:37.919 14:49:10 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:37.919 14:49:10 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:37.919 14:49:10 -- target/discovery.sh@57 -- # nvmftestfini 00:08:37.919 14:49:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:37.919 14:49:10 -- nvmf/common.sh@116 -- # sync 00:08:37.919 14:49:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:37.919 14:49:10 -- nvmf/common.sh@119 -- # set +e 00:08:37.919 14:49:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:37.919 14:49:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:37.919 rmmod nvme_tcp 00:08:37.919 rmmod nvme_fabrics 00:08:37.919 rmmod nvme_keyring 00:08:37.919 14:49:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:37.919 14:49:10 -- nvmf/common.sh@123 -- # set -e 00:08:37.919 14:49:10 -- nvmf/common.sh@124 -- # return 0 00:08:37.919 14:49:10 -- nvmf/common.sh@477 -- # '[' -n 73474 ']' 00:08:37.919 14:49:10 -- nvmf/common.sh@478 -- # killprocess 73474 00:08:37.919 14:49:10 -- common/autotest_common.sh@936 -- # '[' -z 73474 ']' 00:08:37.919 14:49:10 -- 
common/autotest_common.sh@940 -- # kill -0 73474 00:08:37.919 14:49:10 -- common/autotest_common.sh@941 -- # uname 00:08:37.919 14:49:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:37.919 14:49:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73474 00:08:37.919 killing process with pid 73474 00:08:37.919 14:49:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:37.919 14:49:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:37.919 14:49:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73474' 00:08:37.919 14:49:11 -- common/autotest_common.sh@955 -- # kill 73474 00:08:37.919 [2024-12-01 14:49:11.029866] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:37.919 14:49:11 -- common/autotest_common.sh@960 -- # wait 73474 00:08:38.178 14:49:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:38.178 14:49:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:38.178 14:49:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:38.178 14:49:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.178 14:49:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:38.178 14:49:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.178 14:49:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.178 14:49:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.178 14:49:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:38.178 00:08:38.178 real 0m2.435s 00:08:38.178 user 0m6.988s 00:08:38.178 sys 0m0.649s 00:08:38.178 14:49:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.178 14:49:11 -- common/autotest_common.sh@10 -- # set +x 00:08:38.178 ************************************ 00:08:38.178 END TEST nvmf_discovery 00:08:38.178 ************************************ 00:08:38.438 14:49:11 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:38.438 14:49:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:38.438 14:49:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.438 14:49:11 -- common/autotest_common.sh@10 -- # set +x 00:08:38.438 ************************************ 00:08:38.438 START TEST nvmf_referrals 00:08:38.438 ************************************ 00:08:38.438 14:49:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:38.438 * Looking for test storage... 
00:08:38.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.438 14:49:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:38.438 14:49:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:38.438 14:49:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:38.438 14:49:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:38.438 14:49:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:38.438 14:49:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:38.438 14:49:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:38.438 14:49:11 -- scripts/common.sh@335 -- # IFS=.-: 00:08:38.438 14:49:11 -- scripts/common.sh@335 -- # read -ra ver1 00:08:38.438 14:49:11 -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.439 14:49:11 -- scripts/common.sh@336 -- # read -ra ver2 00:08:38.439 14:49:11 -- scripts/common.sh@337 -- # local 'op=<' 00:08:38.439 14:49:11 -- scripts/common.sh@339 -- # ver1_l=2 00:08:38.439 14:49:11 -- scripts/common.sh@340 -- # ver2_l=1 00:08:38.439 14:49:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:38.439 14:49:11 -- scripts/common.sh@343 -- # case "$op" in 00:08:38.439 14:49:11 -- scripts/common.sh@344 -- # : 1 00:08:38.439 14:49:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:38.439 14:49:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.439 14:49:11 -- scripts/common.sh@364 -- # decimal 1 00:08:38.439 14:49:11 -- scripts/common.sh@352 -- # local d=1 00:08:38.439 14:49:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.439 14:49:11 -- scripts/common.sh@354 -- # echo 1 00:08:38.439 14:49:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:38.439 14:49:11 -- scripts/common.sh@365 -- # decimal 2 00:08:38.439 14:49:11 -- scripts/common.sh@352 -- # local d=2 00:08:38.439 14:49:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.439 14:49:11 -- scripts/common.sh@354 -- # echo 2 00:08:38.439 14:49:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:38.439 14:49:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:38.439 14:49:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:38.439 14:49:11 -- scripts/common.sh@367 -- # return 0 00:08:38.439 14:49:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.439 14:49:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:38.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.439 --rc genhtml_branch_coverage=1 00:08:38.439 --rc genhtml_function_coverage=1 00:08:38.439 --rc genhtml_legend=1 00:08:38.439 --rc geninfo_all_blocks=1 00:08:38.439 --rc geninfo_unexecuted_blocks=1 00:08:38.439 00:08:38.439 ' 00:08:38.439 14:49:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:38.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.439 --rc genhtml_branch_coverage=1 00:08:38.439 --rc genhtml_function_coverage=1 00:08:38.439 --rc genhtml_legend=1 00:08:38.439 --rc geninfo_all_blocks=1 00:08:38.439 --rc geninfo_unexecuted_blocks=1 00:08:38.439 00:08:38.439 ' 00:08:38.439 14:49:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:38.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.439 --rc genhtml_branch_coverage=1 00:08:38.439 --rc genhtml_function_coverage=1 00:08:38.439 --rc genhtml_legend=1 00:08:38.439 --rc geninfo_all_blocks=1 00:08:38.439 --rc geninfo_unexecuted_blocks=1 00:08:38.439 00:08:38.439 ' 00:08:38.439 
14:49:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:38.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.439 --rc genhtml_branch_coverage=1 00:08:38.439 --rc genhtml_function_coverage=1 00:08:38.439 --rc genhtml_legend=1 00:08:38.439 --rc geninfo_all_blocks=1 00:08:38.439 --rc geninfo_unexecuted_blocks=1 00:08:38.439 00:08:38.439 ' 00:08:38.439 14:49:11 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.439 14:49:11 -- nvmf/common.sh@7 -- # uname -s 00:08:38.439 14:49:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.439 14:49:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.439 14:49:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.439 14:49:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.439 14:49:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.439 14:49:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.439 14:49:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.439 14:49:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.439 14:49:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.439 14:49:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.439 14:49:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:08:38.439 14:49:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:08:38.439 14:49:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.439 14:49:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.439 14:49:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.439 14:49:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.439 14:49:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.439 14:49:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.439 14:49:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.439 14:49:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.439 14:49:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.439 14:49:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.439 14:49:11 -- paths/export.sh@5 -- # export PATH 00:08:38.439 14:49:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.439 14:49:11 -- nvmf/common.sh@46 -- # : 0 00:08:38.439 14:49:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:38.439 14:49:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:38.439 14:49:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:38.439 14:49:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.439 14:49:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.439 14:49:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:38.439 14:49:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:38.439 14:49:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:38.439 14:49:11 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:38.439 14:49:11 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:38.439 14:49:11 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:38.439 14:49:11 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:38.439 14:49:11 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:38.439 14:49:11 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:38.439 14:49:11 -- target/referrals.sh@37 -- # nvmftestinit 00:08:38.439 14:49:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:38.439 14:49:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.439 14:49:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:38.439 14:49:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:38.439 14:49:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:38.439 14:49:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.439 14:49:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.439 14:49:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.439 14:49:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:38.439 14:49:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:38.439 14:49:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:38.439 14:49:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:38.439 14:49:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:38.439 14:49:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:38.439 14:49:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.439 14:49:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
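For orientation, the nvmf_veth_init sequence traced below (the discovery test above ran the same steps) builds a small veth-plus-bridge topology: the initiator interface stays on the host at 10.0.0.1, the target interfaces live inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, and everything is joined by the nvmf_br bridge. The following is a condensed sketch only, using the interface names, addresses, and firewall rules visible in the trace; teardown and error handling are omitted.

# condensed sketch of the topology nvmf_veth_init builds (names/addresses from the trace)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends move into the namespace
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # host -> target namespace, verifies the bridged path

The three pings in the trace (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm this topology is up before the target is started.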
00:08:38.439 14:49:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:38.439 14:49:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:38.439 14:49:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.439 14:49:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.439 14:49:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.439 14:49:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.439 14:49:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.439 14:49:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.439 14:49:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.439 14:49:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.439 14:49:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:38.439 14:49:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:38.439 Cannot find device "nvmf_tgt_br" 00:08:38.439 14:49:11 -- nvmf/common.sh@154 -- # true 00:08:38.439 14:49:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.699 Cannot find device "nvmf_tgt_br2" 00:08:38.699 14:49:11 -- nvmf/common.sh@155 -- # true 00:08:38.699 14:49:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:38.699 14:49:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:38.699 Cannot find device "nvmf_tgt_br" 00:08:38.699 14:49:11 -- nvmf/common.sh@157 -- # true 00:08:38.699 14:49:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:38.699 Cannot find device "nvmf_tgt_br2" 00:08:38.699 14:49:11 -- nvmf/common.sh@158 -- # true 00:08:38.699 14:49:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:38.699 14:49:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:38.699 14:49:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.699 14:49:11 -- nvmf/common.sh@161 -- # true 00:08:38.699 14:49:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.699 14:49:11 -- nvmf/common.sh@162 -- # true 00:08:38.699 14:49:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.699 14:49:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.699 14:49:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.699 14:49:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.699 14:49:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.699 14:49:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.699 14:49:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.699 14:49:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:38.699 14:49:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:38.699 14:49:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:38.699 14:49:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:38.699 14:49:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:08:38.699 14:49:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:38.699 14:49:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.699 14:49:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.699 14:49:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.699 14:49:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:38.699 14:49:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:38.699 14:49:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.699 14:49:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:38.699 14:49:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:38.699 14:49:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:38.699 14:49:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:38.699 14:49:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:38.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:08:38.699 00:08:38.699 --- 10.0.0.2 ping statistics --- 00:08:38.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.699 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:08:38.699 14:49:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:38.699 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:38.699 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:38.699 00:08:38.699 --- 10.0.0.3 ping statistics --- 00:08:38.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.699 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:38.699 14:49:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:38.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:38.962 00:08:38.962 --- 10.0.0.1 ping statistics --- 00:08:38.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.962 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:38.962 14:49:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.962 14:49:11 -- nvmf/common.sh@421 -- # return 0 00:08:38.962 14:49:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:38.962 14:49:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.962 14:49:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:38.962 14:49:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:38.962 14:49:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.962 14:49:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:38.962 14:49:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:38.962 14:49:11 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:38.962 14:49:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.962 14:49:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.962 14:49:11 -- common/autotest_common.sh@10 -- # set +x 00:08:38.962 14:49:11 -- nvmf/common.sh@469 -- # nvmfpid=73703 00:08:38.962 14:49:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.962 14:49:11 -- nvmf/common.sh@470 -- # waitforlisten 73703 00:08:38.962 14:49:11 -- common/autotest_common.sh@829 -- # '[' -z 73703 ']' 00:08:38.962 14:49:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.962 14:49:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.962 14:49:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.962 14:49:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.962 14:49:11 -- common/autotest_common.sh@10 -- # set +x 00:08:38.962 [2024-12-01 14:49:11.904306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:38.962 [2024-12-01 14:49:11.904389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.962 [2024-12-01 14:49:12.043511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.222 [2024-12-01 14:49:12.100933] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:39.222 [2024-12-01 14:49:12.101068] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.222 [2024-12-01 14:49:12.101080] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.222 [2024-12-01 14:49:12.101088] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
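The target application itself runs inside the namespace. A minimal sketch of the launch-and-wait step, with the binary path and flags copied from the trace and waitforlisten being the autotest_common.sh helper that polls the RPC socket (the trace's nvmfappstart wrapper does the pid bookkeeping slightly differently):

# minimal sketch, assuming the repo layout from the trace
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs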
00:08:39.222 [2024-12-01 14:49:12.101239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.222 [2024-12-01 14:49:12.101821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.222 [2024-12-01 14:49:12.101913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.222 [2024-12-01 14:49:12.101917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.158 14:49:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.158 14:49:12 -- common/autotest_common.sh@862 -- # return 0 00:08:40.158 14:49:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:40.158 14:49:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:40.158 14:49:12 -- common/autotest_common.sh@10 -- # set +x 00:08:40.158 14:49:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.158 14:49:12 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.158 14:49:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.158 14:49:12 -- common/autotest_common.sh@10 -- # set +x 00:08:40.158 [2024-12-01 14:49:12.972802] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.158 14:49:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.158 14:49:12 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:40.158 14:49:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.158 14:49:12 -- common/autotest_common.sh@10 -- # set +x 00:08:40.158 [2024-12-01 14:49:13.005361] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:40.159 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.159 14:49:13 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:40.159 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.159 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.159 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.159 14:49:13 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:40.159 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.159 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.159 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.159 14:49:13 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:40.159 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.159 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.159 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.159 14:49:13 -- target/referrals.sh@48 -- # jq length 00:08:40.159 14:49:13 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.159 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.159 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.159 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.159 14:49:13 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:40.159 14:49:13 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:40.159 14:49:13 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.159 14:49:13 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.159 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 
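The rpc_cmd calls in this stretch of the trace set up the referrals under test: create the TCP transport, expose a discovery listener on 10.0.0.2:8009, and register three referrals on port 4430. rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock; run standalone, the sequence would look roughly like the sketch below (the rpc variable is only a shorthand introduced here, addresses and ports are the ones from the trace):

# rough standalone equivalent of the referral setup traced above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
$rpc nvmf_discovery_get_referrals | jq length   # the test expects 3

The trace then cross-checks the same three addresses from the initiator side with nvme discover against 10.0.0.2:8009 before removing the referrals one by one.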
00:08:40.159 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.159 14:49:13 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.159 14:49:13 -- target/referrals.sh@21 -- # sort 00:08:40.159 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.159 14:49:13 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:40.159 14:49:13 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:40.159 14:49:13 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:40.159 14:49:13 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.159 14:49:13 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.159 14:49:13 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.159 14:49:13 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.159 14:49:13 -- target/referrals.sh@26 -- # sort 00:08:40.159 14:49:13 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:40.159 14:49:13 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:40.159 14:49:13 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:40.159 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.159 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.159 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.159 14:49:13 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:40.159 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.159 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.420 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.420 14:49:13 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:40.420 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.420 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.420 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.420 14:49:13 -- target/referrals.sh@56 -- # jq length 00:08:40.420 14:49:13 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.420 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.420 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.420 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.420 14:49:13 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:40.420 14:49:13 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:40.420 14:49:13 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.420 14:49:13 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.420 14:49:13 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.420 14:49:13 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.420 14:49:13 -- target/referrals.sh@26 -- # sort 00:08:40.420 14:49:13 -- target/referrals.sh@26 -- # echo 00:08:40.420 14:49:13 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:40.420 14:49:13 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:40.420 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.420 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.420 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.420 14:49:13 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:40.420 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.420 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.420 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.420 14:49:13 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:40.420 14:49:13 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.420 14:49:13 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.420 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.420 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.420 14:49:13 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.420 14:49:13 -- target/referrals.sh@21 -- # sort 00:08:40.420 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.685 14:49:13 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:40.685 14:49:13 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:40.685 14:49:13 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:40.685 14:49:13 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.685 14:49:13 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.685 14:49:13 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.685 14:49:13 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.685 14:49:13 -- target/referrals.sh@26 -- # sort 00:08:40.685 14:49:13 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:40.685 14:49:13 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:40.685 14:49:13 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:40.685 14:49:13 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:40.685 14:49:13 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:40.685 14:49:13 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:40.685 14:49:13 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.685 14:49:13 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:40.685 14:49:13 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:40.685 14:49:13 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:40.685 14:49:13 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:40.685 14:49:13 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 
--hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.685 14:49:13 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:40.949 14:49:13 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.949 14:49:13 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:40.949 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.949 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.949 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.949 14:49:13 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:40.949 14:49:13 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.949 14:49:13 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.949 14:49:13 -- target/referrals.sh@21 -- # sort 00:08:40.949 14:49:13 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.949 14:49:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.949 14:49:13 -- common/autotest_common.sh@10 -- # set +x 00:08:40.949 14:49:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.949 14:49:13 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:40.949 14:49:13 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.949 14:49:13 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:40.949 14:49:13 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.949 14:49:13 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.949 14:49:13 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.949 14:49:13 -- target/referrals.sh@26 -- # sort 00:08:40.949 14:49:13 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.212 14:49:14 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:41.212 14:49:14 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:41.212 14:49:14 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:41.212 14:49:14 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:41.212 14:49:14 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:41.212 14:49:14 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.212 14:49:14 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:41.212 14:49:14 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:41.212 14:49:14 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:41.212 14:49:14 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:41.212 14:49:14 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:41.212 14:49:14 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.212 14:49:14 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
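The checks above pair every RPC-side change with a host-side discovery, so the same verification can be reproduced by hand. A rough sketch (editor's illustration; the test additionally passes --hostnqn/--hostid to nvme discover, omitted here for brevity, and drives rpc.py through its rpc_cmd wrapper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Point a subsystem referral at 127.0.0.2:4430 ...
    $rpc -s /var/tmp/spdk.sock nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2016-06.io.spdk:cnode1
    # ... target-side view of the referral table ...
    $rpc -s /var/tmp/spdk.sock nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # ... and the host-side view via the discovery service on 10.0.0.2:8009,
    # keeping only entries of the subtype under test.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'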
00:08:41.212 14:49:14 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:41.212 14:49:14 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:41.212 14:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.212 14:49:14 -- common/autotest_common.sh@10 -- # set +x 00:08:41.212 14:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.212 14:49:14 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.212 14:49:14 -- target/referrals.sh@82 -- # jq length 00:08:41.212 14:49:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.212 14:49:14 -- common/autotest_common.sh@10 -- # set +x 00:08:41.212 14:49:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.470 14:49:14 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:41.470 14:49:14 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:41.470 14:49:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.470 14:49:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.470 14:49:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.470 14:49:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.470 14:49:14 -- target/referrals.sh@26 -- # sort 00:08:41.470 14:49:14 -- target/referrals.sh@26 -- # echo 00:08:41.470 14:49:14 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:41.470 14:49:14 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:41.470 14:49:14 -- target/referrals.sh@86 -- # nvmftestfini 00:08:41.470 14:49:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:41.470 14:49:14 -- nvmf/common.sh@116 -- # sync 00:08:41.470 14:49:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:41.470 14:49:14 -- nvmf/common.sh@119 -- # set +e 00:08:41.470 14:49:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:41.470 14:49:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:41.729 rmmod nvme_tcp 00:08:41.729 rmmod nvme_fabrics 00:08:41.729 rmmod nvme_keyring 00:08:41.729 14:49:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:41.729 14:49:14 -- nvmf/common.sh@123 -- # set -e 00:08:41.729 14:49:14 -- nvmf/common.sh@124 -- # return 0 00:08:41.729 14:49:14 -- nvmf/common.sh@477 -- # '[' -n 73703 ']' 00:08:41.729 14:49:14 -- nvmf/common.sh@478 -- # killprocess 73703 00:08:41.729 14:49:14 -- common/autotest_common.sh@936 -- # '[' -z 73703 ']' 00:08:41.729 14:49:14 -- common/autotest_common.sh@940 -- # kill -0 73703 00:08:41.729 14:49:14 -- common/autotest_common.sh@941 -- # uname 00:08:41.729 14:49:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:41.729 14:49:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73703 00:08:41.729 14:49:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:41.729 14:49:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:41.729 killing process with pid 73703 00:08:41.729 14:49:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73703' 00:08:41.729 14:49:14 -- common/autotest_common.sh@955 -- # kill 73703 00:08:41.729 14:49:14 -- common/autotest_common.sh@960 -- # wait 73703 00:08:41.989 14:49:14 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:41.989 14:49:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:41.989 14:49:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:41.989 14:49:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.989 14:49:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:41.989 14:49:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.989 14:49:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.989 14:49:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.989 14:49:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:41.989 00:08:41.989 real 0m3.600s 00:08:41.989 user 0m12.135s 00:08:41.989 sys 0m0.924s 00:08:41.989 14:49:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.989 14:49:14 -- common/autotest_common.sh@10 -- # set +x 00:08:41.989 ************************************ 00:08:41.989 END TEST nvmf_referrals 00:08:41.989 ************************************ 00:08:41.989 14:49:14 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:41.989 14:49:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:41.989 14:49:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.989 14:49:14 -- common/autotest_common.sh@10 -- # set +x 00:08:41.989 ************************************ 00:08:41.989 START TEST nvmf_connect_disconnect 00:08:41.989 ************************************ 00:08:41.989 14:49:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:41.989 * Looking for test storage... 00:08:41.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.989 14:49:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:41.989 14:49:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:41.989 14:49:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:41.989 14:49:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:41.989 14:49:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:41.989 14:49:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:41.989 14:49:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:42.251 14:49:15 -- scripts/common.sh@335 -- # IFS=.-: 00:08:42.251 14:49:15 -- scripts/common.sh@335 -- # read -ra ver1 00:08:42.251 14:49:15 -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.251 14:49:15 -- scripts/common.sh@336 -- # read -ra ver2 00:08:42.251 14:49:15 -- scripts/common.sh@337 -- # local 'op=<' 00:08:42.251 14:49:15 -- scripts/common.sh@339 -- # ver1_l=2 00:08:42.251 14:49:15 -- scripts/common.sh@340 -- # ver2_l=1 00:08:42.251 14:49:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:42.251 14:49:15 -- scripts/common.sh@343 -- # case "$op" in 00:08:42.251 14:49:15 -- scripts/common.sh@344 -- # : 1 00:08:42.251 14:49:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:42.251 14:49:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.251 14:49:15 -- scripts/common.sh@364 -- # decimal 1 00:08:42.251 14:49:15 -- scripts/common.sh@352 -- # local d=1 00:08:42.251 14:49:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.251 14:49:15 -- scripts/common.sh@354 -- # echo 1 00:08:42.251 14:49:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:42.251 14:49:15 -- scripts/common.sh@365 -- # decimal 2 00:08:42.251 14:49:15 -- scripts/common.sh@352 -- # local d=2 00:08:42.251 14:49:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.251 14:49:15 -- scripts/common.sh@354 -- # echo 2 00:08:42.251 14:49:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:42.251 14:49:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:42.251 14:49:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:42.251 14:49:15 -- scripts/common.sh@367 -- # return 0 00:08:42.251 14:49:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.251 14:49:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:42.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.251 --rc genhtml_branch_coverage=1 00:08:42.251 --rc genhtml_function_coverage=1 00:08:42.251 --rc genhtml_legend=1 00:08:42.251 --rc geninfo_all_blocks=1 00:08:42.251 --rc geninfo_unexecuted_blocks=1 00:08:42.251 00:08:42.251 ' 00:08:42.251 14:49:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:42.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.251 --rc genhtml_branch_coverage=1 00:08:42.251 --rc genhtml_function_coverage=1 00:08:42.251 --rc genhtml_legend=1 00:08:42.251 --rc geninfo_all_blocks=1 00:08:42.251 --rc geninfo_unexecuted_blocks=1 00:08:42.251 00:08:42.251 ' 00:08:42.251 14:49:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:42.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.251 --rc genhtml_branch_coverage=1 00:08:42.251 --rc genhtml_function_coverage=1 00:08:42.251 --rc genhtml_legend=1 00:08:42.251 --rc geninfo_all_blocks=1 00:08:42.251 --rc geninfo_unexecuted_blocks=1 00:08:42.251 00:08:42.251 ' 00:08:42.251 14:49:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:42.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.251 --rc genhtml_branch_coverage=1 00:08:42.251 --rc genhtml_function_coverage=1 00:08:42.251 --rc genhtml_legend=1 00:08:42.251 --rc geninfo_all_blocks=1 00:08:42.251 --rc geninfo_unexecuted_blocks=1 00:08:42.251 00:08:42.251 ' 00:08:42.251 14:49:15 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:42.251 14:49:15 -- nvmf/common.sh@7 -- # uname -s 00:08:42.251 14:49:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.251 14:49:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.251 14:49:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.251 14:49:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.251 14:49:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.251 14:49:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.251 14:49:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.251 14:49:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.251 14:49:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.251 14:49:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.251 14:49:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 
00:08:42.251 14:49:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:08:42.251 14:49:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.251 14:49:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.251 14:49:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:42.251 14:49:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.251 14:49:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.251 14:49:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.251 14:49:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.251 14:49:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.251 14:49:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.251 14:49:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.251 14:49:15 -- paths/export.sh@5 -- # export PATH 00:08:42.251 14:49:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.251 14:49:15 -- nvmf/common.sh@46 -- # : 0 00:08:42.251 14:49:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:42.251 14:49:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:42.251 14:49:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:42.251 14:49:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.251 14:49:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.251 14:49:15 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:42.251 14:49:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:42.251 14:49:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:42.251 14:49:15 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.251 14:49:15 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.251 14:49:15 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:42.251 14:49:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:42.251 14:49:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.251 14:49:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:42.251 14:49:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:42.251 14:49:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:42.251 14:49:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.251 14:49:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.251 14:49:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.251 14:49:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:42.251 14:49:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:42.251 14:49:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:42.251 14:49:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:42.251 14:49:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:42.251 14:49:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:42.251 14:49:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.251 14:49:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.251 14:49:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:42.251 14:49:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:42.251 14:49:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:42.251 14:49:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:42.251 14:49:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:42.251 14:49:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.251 14:49:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:42.251 14:49:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:42.251 14:49:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:42.251 14:49:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:42.251 14:49:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:42.251 14:49:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:42.251 Cannot find device "nvmf_tgt_br" 00:08:42.251 14:49:15 -- nvmf/common.sh@154 -- # true 00:08:42.251 14:49:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:42.251 Cannot find device "nvmf_tgt_br2" 00:08:42.251 14:49:15 -- nvmf/common.sh@155 -- # true 00:08:42.251 14:49:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:42.251 14:49:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:42.251 Cannot find device "nvmf_tgt_br" 00:08:42.251 14:49:15 -- nvmf/common.sh@157 -- # true 00:08:42.251 14:49:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:42.251 Cannot find device "nvmf_tgt_br2" 00:08:42.251 14:49:15 -- nvmf/common.sh@158 -- # true 00:08:42.252 14:49:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:42.252 14:49:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:42.252 14:49:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:42.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.252 14:49:15 -- nvmf/common.sh@161 -- # true 00:08:42.252 14:49:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.252 14:49:15 -- nvmf/common.sh@162 -- # true 00:08:42.252 14:49:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:42.252 14:49:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:42.252 14:49:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:42.252 14:49:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:42.252 14:49:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:42.252 14:49:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:42.510 14:49:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:42.510 14:49:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:42.510 14:49:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:42.510 14:49:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:42.510 14:49:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:42.510 14:49:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:42.510 14:49:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:42.510 14:49:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:42.510 14:49:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:42.510 14:49:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:42.510 14:49:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:42.510 14:49:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:42.510 14:49:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:42.510 14:49:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:42.510 14:49:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:42.510 14:49:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:42.510 14:49:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:42.510 14:49:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:42.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:08:42.510 00:08:42.510 --- 10.0.0.2 ping statistics --- 00:08:42.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.510 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:42.510 14:49:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:42.510 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:42.510 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:08:42.510 00:08:42.510 --- 10.0.0.3 ping statistics --- 00:08:42.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.510 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:42.510 14:49:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:42.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:42.510 00:08:42.510 --- 10.0.0.1 ping statistics --- 00:08:42.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.510 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:42.510 14:49:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.510 14:49:15 -- nvmf/common.sh@421 -- # return 0 00:08:42.510 14:49:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:42.510 14:49:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.510 14:49:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:42.510 14:49:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:42.510 14:49:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.510 14:49:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:42.510 14:49:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:42.510 14:49:15 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:42.510 14:49:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:42.510 14:49:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.510 14:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:42.510 14:49:15 -- nvmf/common.sh@469 -- # nvmfpid=74024 00:08:42.510 14:49:15 -- nvmf/common.sh@470 -- # waitforlisten 74024 00:08:42.510 14:49:15 -- common/autotest_common.sh@829 -- # '[' -z 74024 ']' 00:08:42.510 14:49:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.510 14:49:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.510 14:49:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.510 14:49:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.510 14:49:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.510 14:49:15 -- common/autotest_common.sh@10 -- # set +x 00:08:42.510 [2024-12-01 14:49:15.592093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:42.510 [2024-12-01 14:49:15.592193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.769 [2024-12-01 14:49:15.731926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.769 [2024-12-01 14:49:15.788862] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:42.769 [2024-12-01 14:49:15.789016] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.769 [2024-12-01 14:49:15.789028] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.769 [2024-12-01 14:49:15.789035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
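The veth plumbing logged a few entries back is the same nvmf_veth_init topology used by every test in this job: the initiator address stays in the root namespace, the target interfaces live in nvmf_tgt_ns_spdk, and a bridge ties the two sides together. Condensed recap (sketch only; the second target interface, nvmf_tgt_if2 with 10.0.0.3, is set up the same way and left out here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # root ns -> target; the namespace pings 10.0.0.1 back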
00:08:42.769 [2024-12-01 14:49:15.789446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.769 [2024-12-01 14:49:15.789653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.769 [2024-12-01 14:49:15.789798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.769 [2024-12-01 14:49:15.789804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.706 14:49:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.706 14:49:16 -- common/autotest_common.sh@862 -- # return 0 00:08:43.706 14:49:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:43.706 14:49:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.706 14:49:16 -- common/autotest_common.sh@10 -- # set +x 00:08:43.706 14:49:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.706 14:49:16 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:43.706 14:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.706 14:49:16 -- common/autotest_common.sh@10 -- # set +x 00:08:43.706 [2024-12-01 14:49:16.645956] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.706 14:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.706 14:49:16 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:43.706 14:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.706 14:49:16 -- common/autotest_common.sh@10 -- # set +x 00:08:43.706 14:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.706 14:49:16 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:43.706 14:49:16 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:43.706 14:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.706 14:49:16 -- common/autotest_common.sh@10 -- # set +x 00:08:43.706 14:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.706 14:49:16 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:43.706 14:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.706 14:49:16 -- common/autotest_common.sh@10 -- # set +x 00:08:43.706 14:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.706 14:49:16 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.706 14:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.706 14:49:16 -- common/autotest_common.sh@10 -- # set +x 00:08:43.706 [2024-12-01 14:49:16.711004] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.706 14:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.706 14:49:16 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:43.706 14:49:16 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:43.706 14:49:16 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:43.706 14:49:16 -- target/connect_disconnect.sh@34 -- # set +x 00:08:46.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:55.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.523 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:45.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.902 14:53:00 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
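Each of the "disconnected 1 controller(s)" lines above is one pass of the connect/disconnect loop (num_iterations=100) against the subsystem provisioned at the start of the test. Boiled down to a hand-runnable sketch (illustrative only; the real loop also verifies the controller and its serial came up before tearing the connection down, which is elided here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # One-time provisioning, mirroring the rpc_cmd calls logged earlier:
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                                    # -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 100 connect/disconnect iterations; -i 8 requests 8 I/O queues per controller.
    for _ in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the line seen above
    done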
00:12:27.902 14:53:00 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:27.902 14:53:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:27.902 14:53:00 -- nvmf/common.sh@116 -- # sync 00:12:27.902 14:53:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:27.902 14:53:00 -- nvmf/common.sh@119 -- # set +e 00:12:27.902 14:53:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:27.902 14:53:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:27.902 rmmod nvme_tcp 00:12:27.902 rmmod nvme_fabrics 00:12:27.902 rmmod nvme_keyring 00:12:27.902 14:53:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:27.902 14:53:00 -- nvmf/common.sh@123 -- # set -e 00:12:27.902 14:53:00 -- nvmf/common.sh@124 -- # return 0 00:12:27.902 14:53:00 -- nvmf/common.sh@477 -- # '[' -n 74024 ']' 00:12:27.902 14:53:00 -- nvmf/common.sh@478 -- # killprocess 74024 00:12:27.902 14:53:00 -- common/autotest_common.sh@936 -- # '[' -z 74024 ']' 00:12:27.902 14:53:00 -- common/autotest_common.sh@940 -- # kill -0 74024 00:12:27.902 14:53:00 -- common/autotest_common.sh@941 -- # uname 00:12:27.902 14:53:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:27.902 14:53:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74024 00:12:27.902 14:53:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:27.902 14:53:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:27.902 killing process with pid 74024 00:12:27.902 14:53:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74024' 00:12:27.902 14:53:00 -- common/autotest_common.sh@955 -- # kill 74024 00:12:27.902 14:53:00 -- common/autotest_common.sh@960 -- # wait 74024 00:12:28.163 14:53:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:28.163 14:53:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:28.163 14:53:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:28.163 14:53:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.163 14:53:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:28.163 14:53:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.163 14:53:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.163 14:53:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.163 14:53:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:28.163 00:12:28.163 real 3m46.139s 00:12:28.163 user 14m45.064s 00:12:28.163 sys 0m17.974s 00:12:28.163 14:53:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:28.163 14:53:01 -- common/autotest_common.sh@10 -- # set +x 00:12:28.163 ************************************ 00:12:28.163 END TEST nvmf_connect_disconnect 00:12:28.163 ************************************ 00:12:28.163 14:53:01 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:28.163 14:53:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:28.163 14:53:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.163 14:53:01 -- common/autotest_common.sh@10 -- # set +x 00:12:28.163 ************************************ 00:12:28.163 START TEST nvmf_multitarget 00:12:28.163 ************************************ 00:12:28.163 14:53:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:28.163 * Looking for test storage... 
00:12:28.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:28.163 14:53:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:28.163 14:53:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:28.163 14:53:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:28.423 14:53:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:28.423 14:53:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:28.423 14:53:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:28.423 14:53:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:28.423 14:53:01 -- scripts/common.sh@335 -- # IFS=.-: 00:12:28.423 14:53:01 -- scripts/common.sh@335 -- # read -ra ver1 00:12:28.423 14:53:01 -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.423 14:53:01 -- scripts/common.sh@336 -- # read -ra ver2 00:12:28.423 14:53:01 -- scripts/common.sh@337 -- # local 'op=<' 00:12:28.423 14:53:01 -- scripts/common.sh@339 -- # ver1_l=2 00:12:28.423 14:53:01 -- scripts/common.sh@340 -- # ver2_l=1 00:12:28.423 14:53:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:28.423 14:53:01 -- scripts/common.sh@343 -- # case "$op" in 00:12:28.423 14:53:01 -- scripts/common.sh@344 -- # : 1 00:12:28.423 14:53:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:28.423 14:53:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.423 14:53:01 -- scripts/common.sh@364 -- # decimal 1 00:12:28.423 14:53:01 -- scripts/common.sh@352 -- # local d=1 00:12:28.423 14:53:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.423 14:53:01 -- scripts/common.sh@354 -- # echo 1 00:12:28.423 14:53:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:28.423 14:53:01 -- scripts/common.sh@365 -- # decimal 2 00:12:28.423 14:53:01 -- scripts/common.sh@352 -- # local d=2 00:12:28.423 14:53:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.423 14:53:01 -- scripts/common.sh@354 -- # echo 2 00:12:28.423 14:53:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:28.423 14:53:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:28.423 14:53:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:28.423 14:53:01 -- scripts/common.sh@367 -- # return 0 00:12:28.423 14:53:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.423 14:53:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:28.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.423 --rc genhtml_branch_coverage=1 00:12:28.423 --rc genhtml_function_coverage=1 00:12:28.423 --rc genhtml_legend=1 00:12:28.423 --rc geninfo_all_blocks=1 00:12:28.423 --rc geninfo_unexecuted_blocks=1 00:12:28.423 00:12:28.423 ' 00:12:28.423 14:53:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:28.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.423 --rc genhtml_branch_coverage=1 00:12:28.423 --rc genhtml_function_coverage=1 00:12:28.423 --rc genhtml_legend=1 00:12:28.423 --rc geninfo_all_blocks=1 00:12:28.423 --rc geninfo_unexecuted_blocks=1 00:12:28.423 00:12:28.423 ' 00:12:28.423 14:53:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:28.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.423 --rc genhtml_branch_coverage=1 00:12:28.423 --rc genhtml_function_coverage=1 00:12:28.423 --rc genhtml_legend=1 00:12:28.423 --rc geninfo_all_blocks=1 00:12:28.423 --rc geninfo_unexecuted_blocks=1 00:12:28.423 00:12:28.423 ' 00:12:28.423 
14:53:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:28.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.423 --rc genhtml_branch_coverage=1 00:12:28.423 --rc genhtml_function_coverage=1 00:12:28.423 --rc genhtml_legend=1 00:12:28.423 --rc geninfo_all_blocks=1 00:12:28.423 --rc geninfo_unexecuted_blocks=1 00:12:28.423 00:12:28.423 ' 00:12:28.423 14:53:01 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:28.423 14:53:01 -- nvmf/common.sh@7 -- # uname -s 00:12:28.423 14:53:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.423 14:53:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.423 14:53:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.423 14:53:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.423 14:53:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.423 14:53:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.423 14:53:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.423 14:53:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.423 14:53:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.423 14:53:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.423 14:53:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:12:28.423 14:53:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:12:28.423 14:53:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.423 14:53:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.423 14:53:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:28.423 14:53:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:28.423 14:53:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.423 14:53:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.423 14:53:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.423 14:53:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.423 14:53:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.423 14:53:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.423 14:53:01 -- paths/export.sh@5 -- # export PATH 00:12:28.423 14:53:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.423 14:53:01 -- nvmf/common.sh@46 -- # : 0 00:12:28.424 14:53:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:28.424 14:53:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:28.424 14:53:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:28.424 14:53:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.424 14:53:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.424 14:53:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:28.424 14:53:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:28.424 14:53:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:28.424 14:53:01 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:28.424 14:53:01 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:28.424 14:53:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:28.424 14:53:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.424 14:53:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:28.424 14:53:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:28.424 14:53:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:28.424 14:53:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.424 14:53:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.424 14:53:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.424 14:53:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:28.424 14:53:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:28.424 14:53:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:28.424 14:53:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:28.424 14:53:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:28.424 14:53:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:28.424 14:53:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.424 14:53:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.424 14:53:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:28.424 14:53:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:28.424 14:53:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:28.424 14:53:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:28.424 14:53:01 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:28.424 14:53:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.424 14:53:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:28.424 14:53:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:28.424 14:53:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:28.424 14:53:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:28.424 14:53:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:28.424 14:53:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:28.424 Cannot find device "nvmf_tgt_br" 00:12:28.424 14:53:01 -- nvmf/common.sh@154 -- # true 00:12:28.424 14:53:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:28.424 Cannot find device "nvmf_tgt_br2" 00:12:28.424 14:53:01 -- nvmf/common.sh@155 -- # true 00:12:28.424 14:53:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:28.424 14:53:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:28.424 Cannot find device "nvmf_tgt_br" 00:12:28.424 14:53:01 -- nvmf/common.sh@157 -- # true 00:12:28.424 14:53:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:28.424 Cannot find device "nvmf_tgt_br2" 00:12:28.424 14:53:01 -- nvmf/common.sh@158 -- # true 00:12:28.424 14:53:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:28.424 14:53:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:28.424 14:53:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:28.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:28.424 14:53:01 -- nvmf/common.sh@161 -- # true 00:12:28.424 14:53:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:28.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:28.424 14:53:01 -- nvmf/common.sh@162 -- # true 00:12:28.424 14:53:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:28.424 14:53:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:28.424 14:53:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:28.424 14:53:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:28.684 14:53:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:28.684 14:53:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:28.684 14:53:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:28.684 14:53:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:28.684 14:53:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:28.684 14:53:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:28.684 14:53:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:28.684 14:53:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:28.684 14:53:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:28.684 14:53:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:28.684 14:53:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:28.684 14:53:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:28.684 14:53:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:28.684 14:53:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:28.684 14:53:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:28.684 14:53:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:28.684 14:53:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:28.684 14:53:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:28.684 14:53:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:28.684 14:53:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:28.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:12:28.684 00:12:28.684 --- 10.0.0.2 ping statistics --- 00:12:28.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.684 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:28.684 14:53:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:28.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:28.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:12:28.684 00:12:28.684 --- 10.0.0.3 ping statistics --- 00:12:28.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.684 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:28.684 14:53:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:28.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:28.684 00:12:28.684 --- 10.0.0.1 ping statistics --- 00:12:28.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.684 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:28.684 14:53:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.684 14:53:01 -- nvmf/common.sh@421 -- # return 0 00:12:28.684 14:53:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:28.684 14:53:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.684 14:53:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:28.684 14:53:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:28.684 14:53:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.684 14:53:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:28.684 14:53:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:28.684 14:53:01 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:28.684 14:53:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:28.684 14:53:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.684 14:53:01 -- common/autotest_common.sh@10 -- # set +x 00:12:28.684 14:53:01 -- nvmf/common.sh@469 -- # nvmfpid=77817 00:12:28.684 14:53:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.684 14:53:01 -- nvmf/common.sh@470 -- # waitforlisten 77817 00:12:28.684 14:53:01 -- common/autotest_common.sh@829 -- # '[' -z 77817 ']' 00:12:28.684 14:53:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.684 14:53:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:28.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
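The nvmf_veth_init sequence traced above boils down to a small fixed topology. A condensed sketch of it, using only commands and names that appear in the trace (the second target interface nvmf_tgt_if2/nvmf_tgt_br2 and the per-link 'ip link set ... up' calls are elided for brevity; the earlier "Cannot find device" and "Cannot open network namespace" messages are expected, since the initial teardown runs before anything exists yet):

  ip netns add nvmf_tgt_ns_spdk                                 # target runs inside its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
  ip link add nvmf_br type bridge                               # bridge joins the host-side peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # connectivity check before the test proper
  modprobe nvme-tcp                                             # initiator-side transport module
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # started in the background; waitforlisten polls /var/tmp/spdk.sock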
00:12:28.684 14:53:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.684 14:53:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:28.684 14:53:01 -- common/autotest_common.sh@10 -- # set +x 00:12:28.684 [2024-12-01 14:53:01.794314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:28.684 [2024-12-01 14:53:01.795087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.943 [2024-12-01 14:53:01.932425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.943 [2024-12-01 14:53:01.985362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:28.943 [2024-12-01 14:53:01.985498] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.943 [2024-12-01 14:53:01.985510] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.943 [2024-12-01 14:53:01.985518] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.943 [2024-12-01 14:53:01.985669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.943 [2024-12-01 14:53:01.985836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.943 [2024-12-01 14:53:01.986522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.943 [2024-12-01 14:53:01.986551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.880 14:53:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:29.880 14:53:02 -- common/autotest_common.sh@862 -- # return 0 00:12:29.880 14:53:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:29.880 14:53:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:29.880 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:12:29.880 14:53:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.880 14:53:02 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:29.880 14:53:02 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.880 14:53:02 -- target/multitarget.sh@21 -- # jq length 00:12:29.880 14:53:02 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:29.880 14:53:02 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:30.140 "nvmf_tgt_1" 00:12:30.140 14:53:03 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:30.140 "nvmf_tgt_2" 00:12:30.140 14:53:03 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:30.140 14:53:03 -- target/multitarget.sh@28 -- # jq length 00:12:30.399 14:53:03 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:30.399 14:53:03 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:30.399 true 00:12:30.399 14:53:03 -- target/multitarget.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:30.659 true 00:12:30.659 14:53:03 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:30.659 14:53:03 -- target/multitarget.sh@35 -- # jq length 00:12:30.659 14:53:03 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:30.659 14:53:03 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:30.659 14:53:03 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:30.659 14:53:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:30.659 14:53:03 -- nvmf/common.sh@116 -- # sync 00:12:30.918 14:53:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:30.918 14:53:03 -- nvmf/common.sh@119 -- # set +e 00:12:30.918 14:53:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:30.919 14:53:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:30.919 rmmod nvme_tcp 00:12:30.919 rmmod nvme_fabrics 00:12:30.919 rmmod nvme_keyring 00:12:30.919 14:53:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:30.919 14:53:03 -- nvmf/common.sh@123 -- # set -e 00:12:30.919 14:53:03 -- nvmf/common.sh@124 -- # return 0 00:12:30.919 14:53:03 -- nvmf/common.sh@477 -- # '[' -n 77817 ']' 00:12:30.919 14:53:03 -- nvmf/common.sh@478 -- # killprocess 77817 00:12:30.919 14:53:03 -- common/autotest_common.sh@936 -- # '[' -z 77817 ']' 00:12:30.919 14:53:03 -- common/autotest_common.sh@940 -- # kill -0 77817 00:12:30.919 14:53:03 -- common/autotest_common.sh@941 -- # uname 00:12:30.919 14:53:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.919 14:53:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77817 00:12:30.919 14:53:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.919 14:53:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.919 killing process with pid 77817 00:12:30.919 14:53:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77817' 00:12:30.919 14:53:03 -- common/autotest_common.sh@955 -- # kill 77817 00:12:30.919 14:53:03 -- common/autotest_common.sh@960 -- # wait 77817 00:12:31.178 14:53:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:31.178 14:53:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:31.178 14:53:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:31.178 14:53:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.178 14:53:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:31.178 14:53:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.178 14:53:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.178 14:53:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.178 14:53:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:31.178 00:12:31.178 real 0m3.009s 00:12:31.178 user 0m9.866s 00:12:31.178 sys 0m0.708s 00:12:31.178 14:53:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.178 14:53:04 -- common/autotest_common.sh@10 -- # set +x 00:12:31.178 ************************************ 00:12:31.178 END TEST nvmf_multitarget 00:12:31.178 ************************************ 00:12:31.178 14:53:04 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:31.178 14:53:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.178 14:53:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.178 
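The nvmf_multitarget run that just finished exercises only a handful of RPCs through multitarget_rpc.py. A condensed sketch of that flow, with the same target names and the same -s 32 argument seen in the trace, assuming the nvmf_tgt started earlier is still serving /var/tmp/spdk.sock:

  rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]      # only the default target exists at the start
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32            # create two additional targets
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]      # default target plus the two new ones
  $rpc_py nvmf_delete_target -n nvmf_tgt_1                  # delete them again
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]      # back to just the default target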
14:53:04 -- common/autotest_common.sh@10 -- # set +x 00:12:31.178 ************************************ 00:12:31.178 START TEST nvmf_rpc 00:12:31.178 ************************************ 00:12:31.178 14:53:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:31.178 * Looking for test storage... 00:12:31.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.178 14:53:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:31.178 14:53:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:31.178 14:53:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:31.437 14:53:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:31.437 14:53:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:31.437 14:53:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:31.437 14:53:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:31.437 14:53:04 -- scripts/common.sh@335 -- # IFS=.-: 00:12:31.437 14:53:04 -- scripts/common.sh@335 -- # read -ra ver1 00:12:31.437 14:53:04 -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.437 14:53:04 -- scripts/common.sh@336 -- # read -ra ver2 00:12:31.437 14:53:04 -- scripts/common.sh@337 -- # local 'op=<' 00:12:31.437 14:53:04 -- scripts/common.sh@339 -- # ver1_l=2 00:12:31.437 14:53:04 -- scripts/common.sh@340 -- # ver2_l=1 00:12:31.437 14:53:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:31.437 14:53:04 -- scripts/common.sh@343 -- # case "$op" in 00:12:31.437 14:53:04 -- scripts/common.sh@344 -- # : 1 00:12:31.437 14:53:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:31.437 14:53:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:31.437 14:53:04 -- scripts/common.sh@364 -- # decimal 1 00:12:31.437 14:53:04 -- scripts/common.sh@352 -- # local d=1 00:12:31.437 14:53:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.437 14:53:04 -- scripts/common.sh@354 -- # echo 1 00:12:31.437 14:53:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:31.437 14:53:04 -- scripts/common.sh@365 -- # decimal 2 00:12:31.437 14:53:04 -- scripts/common.sh@352 -- # local d=2 00:12:31.437 14:53:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.437 14:53:04 -- scripts/common.sh@354 -- # echo 2 00:12:31.437 14:53:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:31.437 14:53:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:31.437 14:53:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:31.437 14:53:04 -- scripts/common.sh@367 -- # return 0 00:12:31.437 14:53:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.437 14:53:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:31.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.437 --rc genhtml_branch_coverage=1 00:12:31.437 --rc genhtml_function_coverage=1 00:12:31.437 --rc genhtml_legend=1 00:12:31.437 --rc geninfo_all_blocks=1 00:12:31.437 --rc geninfo_unexecuted_blocks=1 00:12:31.437 00:12:31.437 ' 00:12:31.437 14:53:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:31.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.437 --rc genhtml_branch_coverage=1 00:12:31.437 --rc genhtml_function_coverage=1 00:12:31.437 --rc genhtml_legend=1 00:12:31.437 --rc geninfo_all_blocks=1 00:12:31.437 --rc geninfo_unexecuted_blocks=1 00:12:31.437 00:12:31.437 ' 00:12:31.437 14:53:04 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:31.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.437 --rc genhtml_branch_coverage=1 00:12:31.437 --rc genhtml_function_coverage=1 00:12:31.437 --rc genhtml_legend=1 00:12:31.437 --rc geninfo_all_blocks=1 00:12:31.437 --rc geninfo_unexecuted_blocks=1 00:12:31.437 00:12:31.437 ' 00:12:31.437 14:53:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:31.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.437 --rc genhtml_branch_coverage=1 00:12:31.437 --rc genhtml_function_coverage=1 00:12:31.437 --rc genhtml_legend=1 00:12:31.437 --rc geninfo_all_blocks=1 00:12:31.437 --rc geninfo_unexecuted_blocks=1 00:12:31.437 00:12:31.437 ' 00:12:31.437 14:53:04 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.437 14:53:04 -- nvmf/common.sh@7 -- # uname -s 00:12:31.437 14:53:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.437 14:53:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.437 14:53:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.437 14:53:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.437 14:53:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.438 14:53:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.438 14:53:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.438 14:53:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.438 14:53:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.438 14:53:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.438 14:53:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:12:31.438 14:53:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:12:31.438 14:53:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.438 14:53:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.438 14:53:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.438 14:53:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.438 14:53:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.438 14:53:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.438 14:53:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.438 14:53:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.438 14:53:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.438 14:53:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.438 14:53:04 -- paths/export.sh@5 -- # export PATH 00:12:31.438 14:53:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.438 14:53:04 -- nvmf/common.sh@46 -- # : 0 00:12:31.438 14:53:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:31.438 14:53:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:31.438 14:53:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:31.438 14:53:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.438 14:53:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.438 14:53:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:31.438 14:53:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:31.438 14:53:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:31.438 14:53:04 -- target/rpc.sh@11 -- # loops=5 00:12:31.438 14:53:04 -- target/rpc.sh@23 -- # nvmftestinit 00:12:31.438 14:53:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:31.438 14:53:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.438 14:53:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:31.438 14:53:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:31.438 14:53:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:31.438 14:53:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.438 14:53:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.438 14:53:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.438 14:53:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:31.438 14:53:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:31.438 14:53:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:31.438 14:53:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:31.438 14:53:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:31.438 14:53:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:31.438 14:53:04 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:31.438 14:53:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.438 14:53:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.438 14:53:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:31.438 14:53:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.438 14:53:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.438 14:53:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.438 14:53:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.438 14:53:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.438 14:53:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.438 14:53:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.438 14:53:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.438 14:53:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:31.438 14:53:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:31.438 Cannot find device "nvmf_tgt_br" 00:12:31.438 14:53:04 -- nvmf/common.sh@154 -- # true 00:12:31.438 14:53:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.438 Cannot find device "nvmf_tgt_br2" 00:12:31.438 14:53:04 -- nvmf/common.sh@155 -- # true 00:12:31.438 14:53:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:31.438 14:53:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:31.438 Cannot find device "nvmf_tgt_br" 00:12:31.438 14:53:04 -- nvmf/common.sh@157 -- # true 00:12:31.438 14:53:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:31.438 Cannot find device "nvmf_tgt_br2" 00:12:31.438 14:53:04 -- nvmf/common.sh@158 -- # true 00:12:31.438 14:53:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:31.438 14:53:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:31.438 14:53:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.438 14:53:04 -- nvmf/common.sh@161 -- # true 00:12:31.438 14:53:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.438 14:53:04 -- nvmf/common.sh@162 -- # true 00:12:31.438 14:53:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.438 14:53:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.438 14:53:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.697 14:53:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.697 14:53:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.697 14:53:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.697 14:53:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.697 14:53:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:31.697 14:53:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:31.697 14:53:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:31.697 14:53:04 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:12:31.697 14:53:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:31.697 14:53:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:31.697 14:53:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.697 14:53:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:31.697 14:53:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:31.697 14:53:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:31.697 14:53:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:31.697 14:53:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.697 14:53:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.697 14:53:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:31.697 14:53:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:31.697 14:53:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:31.697 14:53:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:31.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:12:31.697 00:12:31.697 --- 10.0.0.2 ping statistics --- 00:12:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.697 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:31.697 14:53:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:31.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:31.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:12:31.697 00:12:31.697 --- 10.0.0.3 ping statistics --- 00:12:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.697 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:31.697 14:53:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:31.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:31.697 00:12:31.697 --- 10.0.0.1 ping statistics --- 00:12:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.697 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:31.697 14:53:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.697 14:53:04 -- nvmf/common.sh@421 -- # return 0 00:12:31.697 14:53:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:31.697 14:53:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.697 14:53:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:31.697 14:53:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:31.697 14:53:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.697 14:53:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:31.697 14:53:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:31.697 14:53:04 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:31.697 14:53:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:31.697 14:53:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.697 14:53:04 -- common/autotest_common.sh@10 -- # set +x 00:12:31.697 14:53:04 -- nvmf/common.sh@469 -- # nvmfpid=78056 00:12:31.697 14:53:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.697 14:53:04 -- nvmf/common.sh@470 -- # waitforlisten 78056 00:12:31.697 14:53:04 -- common/autotest_common.sh@829 -- # '[' -z 78056 ']' 00:12:31.697 14:53:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.697 14:53:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.697 14:53:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.697 14:53:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.697 14:53:04 -- common/autotest_common.sh@10 -- # set +x 00:12:31.956 [2024-12-01 14:53:04.812601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:31.956 [2024-12-01 14:53:04.812692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.956 [2024-12-01 14:53:04.951805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.956 [2024-12-01 14:53:05.005549] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:31.956 [2024-12-01 14:53:05.005976] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.956 [2024-12-01 14:53:05.006037] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.956 [2024-12-01 14:53:05.006303] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:31.956 [2024-12-01 14:53:05.006473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.956 [2024-12-01 14:53:05.006610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.956 [2024-12-01 14:53:05.007270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.956 [2024-12-01 14:53:05.007304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.890 14:53:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.890 14:53:05 -- common/autotest_common.sh@862 -- # return 0 00:12:32.890 14:53:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:32.890 14:53:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.890 14:53:05 -- common/autotest_common.sh@10 -- # set +x 00:12:32.890 14:53:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.890 14:53:05 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:32.890 14:53:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.890 14:53:05 -- common/autotest_common.sh@10 -- # set +x 00:12:32.890 14:53:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.890 14:53:05 -- target/rpc.sh@26 -- # stats='{ 00:12:32.890 "poll_groups": [ 00:12:32.890 { 00:12:32.890 "admin_qpairs": 0, 00:12:32.890 "completed_nvme_io": 0, 00:12:32.890 "current_admin_qpairs": 0, 00:12:32.890 "current_io_qpairs": 0, 00:12:32.890 "io_qpairs": 0, 00:12:32.890 "name": "nvmf_tgt_poll_group_0", 00:12:32.890 "pending_bdev_io": 0, 00:12:32.890 "transports": [] 00:12:32.890 }, 00:12:32.890 { 00:12:32.890 "admin_qpairs": 0, 00:12:32.890 "completed_nvme_io": 0, 00:12:32.890 "current_admin_qpairs": 0, 00:12:32.890 "current_io_qpairs": 0, 00:12:32.890 "io_qpairs": 0, 00:12:32.890 "name": "nvmf_tgt_poll_group_1", 00:12:32.890 "pending_bdev_io": 0, 00:12:32.890 "transports": [] 00:12:32.890 }, 00:12:32.890 { 00:12:32.890 "admin_qpairs": 0, 00:12:32.890 "completed_nvme_io": 0, 00:12:32.890 "current_admin_qpairs": 0, 00:12:32.890 "current_io_qpairs": 0, 00:12:32.890 "io_qpairs": 0, 00:12:32.890 "name": "nvmf_tgt_poll_group_2", 00:12:32.890 "pending_bdev_io": 0, 00:12:32.890 "transports": [] 00:12:32.890 }, 00:12:32.890 { 00:12:32.890 "admin_qpairs": 0, 00:12:32.890 "completed_nvme_io": 0, 00:12:32.890 "current_admin_qpairs": 0, 00:12:32.890 "current_io_qpairs": 0, 00:12:32.890 "io_qpairs": 0, 00:12:32.890 "name": "nvmf_tgt_poll_group_3", 00:12:32.890 "pending_bdev_io": 0, 00:12:32.890 "transports": [] 00:12:32.890 } 00:12:32.890 ], 00:12:32.890 "tick_rate": 2200000000 00:12:32.890 }' 00:12:32.890 14:53:05 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:32.890 14:53:05 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:32.890 14:53:05 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:32.890 14:53:05 -- target/rpc.sh@15 -- # wc -l 00:12:32.890 14:53:05 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:32.890 14:53:05 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:33.149 14:53:06 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:33.149 14:53:06 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.149 14:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.149 14:53:06 -- common/autotest_common.sh@10 -- # set +x 00:12:33.149 [2024-12-01 14:53:06.015647] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.149 14:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.149 14:53:06 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:33.149 14:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.149 14:53:06 -- common/autotest_common.sh@10 -- # set +x 00:12:33.149 14:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.149 14:53:06 -- target/rpc.sh@33 -- # stats='{ 00:12:33.149 "poll_groups": [ 00:12:33.149 { 00:12:33.149 "admin_qpairs": 0, 00:12:33.149 "completed_nvme_io": 0, 00:12:33.149 "current_admin_qpairs": 0, 00:12:33.150 "current_io_qpairs": 0, 00:12:33.150 "io_qpairs": 0, 00:12:33.150 "name": "nvmf_tgt_poll_group_0", 00:12:33.150 "pending_bdev_io": 0, 00:12:33.150 "transports": [ 00:12:33.150 { 00:12:33.150 "trtype": "TCP" 00:12:33.150 } 00:12:33.150 ] 00:12:33.150 }, 00:12:33.150 { 00:12:33.150 "admin_qpairs": 0, 00:12:33.150 "completed_nvme_io": 0, 00:12:33.150 "current_admin_qpairs": 0, 00:12:33.150 "current_io_qpairs": 0, 00:12:33.150 "io_qpairs": 0, 00:12:33.150 "name": "nvmf_tgt_poll_group_1", 00:12:33.150 "pending_bdev_io": 0, 00:12:33.150 "transports": [ 00:12:33.150 { 00:12:33.150 "trtype": "TCP" 00:12:33.150 } 00:12:33.150 ] 00:12:33.150 }, 00:12:33.150 { 00:12:33.150 "admin_qpairs": 0, 00:12:33.150 "completed_nvme_io": 0, 00:12:33.150 "current_admin_qpairs": 0, 00:12:33.150 "current_io_qpairs": 0, 00:12:33.150 "io_qpairs": 0, 00:12:33.150 "name": "nvmf_tgt_poll_group_2", 00:12:33.150 "pending_bdev_io": 0, 00:12:33.150 "transports": [ 00:12:33.150 { 00:12:33.150 "trtype": "TCP" 00:12:33.150 } 00:12:33.150 ] 00:12:33.150 }, 00:12:33.150 { 00:12:33.150 "admin_qpairs": 0, 00:12:33.150 "completed_nvme_io": 0, 00:12:33.150 "current_admin_qpairs": 0, 00:12:33.150 "current_io_qpairs": 0, 00:12:33.150 "io_qpairs": 0, 00:12:33.150 "name": "nvmf_tgt_poll_group_3", 00:12:33.150 "pending_bdev_io": 0, 00:12:33.150 "transports": [ 00:12:33.150 { 00:12:33.150 "trtype": "TCP" 00:12:33.150 } 00:12:33.150 ] 00:12:33.150 } 00:12:33.150 ], 00:12:33.150 "tick_rate": 2200000000 00:12:33.150 }' 00:12:33.150 14:53:06 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:33.150 14:53:06 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:33.150 14:53:06 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:33.150 14:53:06 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:33.150 14:53:06 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:33.150 14:53:06 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:33.150 14:53:06 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:33.150 14:53:06 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:33.150 14:53:06 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:33.150 14:53:06 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:33.150 14:53:06 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:33.150 14:53:06 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:33.150 14:53:06 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:33.150 14:53:06 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:33.150 14:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.150 14:53:06 -- common/autotest_common.sh@10 -- # set +x 00:12:33.150 Malloc1 00:12:33.150 14:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.150 14:53:06 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:33.150 14:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.150 14:53:06 -- common/autotest_common.sh@10 -- # set +x 00:12:33.150 
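The qpair checks woven through the nvmf_get_stats dumps above are plain jq/awk one-liners (the jcount and jsum helpers in rpc.sh). A condensed sketch of the pattern, with the expected values from the trace; the exact plumbing of the $stats variable into jq is simplified here:

  stats=$(rpc_cmd nvmf_get_stats)                                           # JSON with one entry per poll group
  [ "$(jq '.poll_groups[].name' <<< "$stats" | wc -l)" -eq 4 ]             # jcount: four poll groups, one per core (-m 0xF)
  [ "$(jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}')" -eq 0 ]   # jsum: no admin qpairs yet
  [ "$(jq '.poll_groups[].io_qpairs'    <<< "$stats" | awk '{s+=$1} END {print s}')" -eq 0 ]   # jsum: no io qpairs yet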
14:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.150 14:53:06 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:33.150 14:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.150 14:53:06 -- common/autotest_common.sh@10 -- # set +x 00:12:33.150 14:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.150 14:53:06 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:33.150 14:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.150 14:53:06 -- common/autotest_common.sh@10 -- # set +x 00:12:33.150 14:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.150 14:53:06 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.150 14:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.150 14:53:06 -- common/autotest_common.sh@10 -- # set +x 00:12:33.150 [2024-12-01 14:53:06.205447] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.150 14:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.150 14:53:06 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b -a 10.0.0.2 -s 4420 00:12:33.150 14:53:06 -- common/autotest_common.sh@650 -- # local es=0 00:12:33.150 14:53:06 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b -a 10.0.0.2 -s 4420 00:12:33.150 14:53:06 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:33.150 14:53:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:33.150 14:53:06 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:33.150 14:53:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:33.150 14:53:06 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:33.150 14:53:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:33.150 14:53:06 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:33.150 14:53:06 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:33.150 14:53:06 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b -a 10.0.0.2 -s 4420 00:12:33.150 [2024-12-01 14:53:06.233836] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b' 00:12:33.150 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:33.150 could not add new controller: failed to write to nvme-fabrics device 00:12:33.150 14:53:06 -- common/autotest_common.sh@653 -- # es=1 00:12:33.150 14:53:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:33.150 14:53:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:33.150 14:53:06 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:33.150 14:53:06 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:12:33.150 14:53:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.150 14:53:06 -- common/autotest_common.sh@10 -- # set +x 00:12:33.150 14:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.150 14:53:06 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.409 14:53:06 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.409 14:53:06 -- common/autotest_common.sh@1187 -- # local i=0 00:12:33.409 14:53:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.409 14:53:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:33.409 14:53:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:35.314 14:53:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:35.314 14:53:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:35.314 14:53:08 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.575 14:53:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:35.575 14:53:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.575 14:53:08 -- common/autotest_common.sh@1197 -- # return 0 00:12:35.575 14:53:08 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.575 14:53:08 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.575 14:53:08 -- common/autotest_common.sh@1208 -- # local i=0 00:12:35.575 14:53:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:35.575 14:53:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.575 14:53:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:35.575 14:53:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.575 14:53:08 -- common/autotest_common.sh@1220 -- # return 0 00:12:35.575 14:53:08 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:12:35.575 14:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.575 14:53:08 -- common/autotest_common.sh@10 -- # set +x 00:12:35.575 14:53:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.575 14:53:08 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.575 14:53:08 -- common/autotest_common.sh@650 -- # local es=0 00:12:35.575 14:53:08 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.575 14:53:08 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:35.575 14:53:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.575 14:53:08 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:35.575 14:53:08 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.575 14:53:08 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:35.575 14:53:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.575 14:53:08 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:35.575 14:53:08 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:35.575 14:53:08 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.575 [2024-12-01 14:53:08.635116] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b' 00:12:35.575 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:35.575 could not add new controller: failed to write to nvme-fabrics device 00:12:35.575 14:53:08 -- common/autotest_common.sh@653 -- # es=1 00:12:35.575 14:53:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.575 14:53:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.575 14:53:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.575 14:53:08 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:35.575 14:53:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.575 14:53:08 -- common/autotest_common.sh@10 -- # set +x 00:12:35.575 14:53:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.575 14:53:08 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.839 14:53:08 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.839 14:53:08 -- common/autotest_common.sh@1187 -- # local i=0 00:12:35.839 14:53:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.839 14:53:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:35.839 14:53:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:37.773 14:53:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:37.773 14:53:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:37.773 14:53:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.773 14:53:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:37.773 14:53:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.773 14:53:10 -- common/autotest_common.sh@1197 -- # return 0 00:12:37.773 14:53:10 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.773 14:53:10 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.773 14:53:10 -- common/autotest_common.sh@1208 -- # local i=0 00:12:37.773 14:53:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:37.773 14:53:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.032 14:53:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.032 14:53:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:38.032 14:53:10 -- common/autotest_common.sh@1220 -- # return 0 00:12:38.032 14:53:10 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.032 14:53:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.032 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:12:38.032 14:53:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.032 14:53:10 -- target/rpc.sh@81 -- # seq 1 5 00:12:38.032 14:53:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.032 14:53:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.032 14:53:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.032 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:12:38.032 14:53:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.032 14:53:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.032 14:53:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.032 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:12:38.032 [2024-12-01 14:53:10.939024] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.032 14:53:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.032 14:53:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.032 14:53:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.032 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:12:38.032 14:53:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.032 14:53:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.032 14:53:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.032 14:53:10 -- common/autotest_common.sh@10 -- # set +x 00:12:38.032 14:53:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.032 14:53:10 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.032 14:53:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.032 14:53:11 -- common/autotest_common.sh@1187 -- # local i=0 00:12:38.032 14:53:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.032 14:53:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:38.032 14:53:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:40.566 14:53:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:40.566 14:53:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:40.566 14:53:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.566 14:53:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:40.566 14:53:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.566 14:53:13 -- common/autotest_common.sh@1197 -- # return 0 00:12:40.566 14:53:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.566 14:53:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.566 14:53:13 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.566 14:53:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.566 14:53:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
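The rest of the rpc.sh trace repeats the same subsystem lifecycle for each of the five loop iterations (loops=5). A condensed sketch of one iteration, using only commands and arguments visible in the trace; rpc_cmd talks to the running nvmf_tgt over /var/tmp/spdk.sock, and NVME_HOST carries the generated --hostnqn/--hostid pair:

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5     # expose Malloc1 as namespace 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: poll 'lsblk -l -o NAME,SERIAL' until a device with serial SPDKISFASTANDAWESOME shows up
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1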
00:12:40.566 14:53:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.566 14:53:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.566 14:53:13 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.566 14:53:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.566 14:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.566 14:53:13 -- common/autotest_common.sh@10 -- # set +x 00:12:40.566 14:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.566 14:53:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.566 14:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.566 14:53:13 -- common/autotest_common.sh@10 -- # set +x 00:12:40.566 14:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.566 14:53:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.566 14:53:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.566 14:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.566 14:53:13 -- common/autotest_common.sh@10 -- # set +x 00:12:40.566 14:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.566 14:53:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.566 14:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.566 14:53:13 -- common/autotest_common.sh@10 -- # set +x 00:12:40.566 [2024-12-01 14:53:13.247424] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.566 14:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.566 14:53:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.566 14:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.566 14:53:13 -- common/autotest_common.sh@10 -- # set +x 00:12:40.566 14:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.566 14:53:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.566 14:53:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.566 14:53:13 -- common/autotest_common.sh@10 -- # set +x 00:12:40.566 14:53:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.566 14:53:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.566 14:53:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.566 14:53:13 -- common/autotest_common.sh@1187 -- # local i=0 00:12:40.566 14:53:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.566 14:53:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:40.566 14:53:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:42.468 14:53:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:42.468 14:53:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:42.468 14:53:15 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.468 14:53:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:42.468 14:53:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.468 14:53:15 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:42.468 14:53:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.468 14:53:15 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.468 14:53:15 -- common/autotest_common.sh@1208 -- # local i=0 00:12:42.468 14:53:15 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:42.468 14:53:15 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.468 14:53:15 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:42.468 14:53:15 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.468 14:53:15 -- common/autotest_common.sh@1220 -- # return 0 00:12:42.468 14:53:15 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.468 14:53:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.468 14:53:15 -- common/autotest_common.sh@10 -- # set +x 00:12:42.468 14:53:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.468 14:53:15 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.468 14:53:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.468 14:53:15 -- common/autotest_common.sh@10 -- # set +x 00:12:42.468 14:53:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.468 14:53:15 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:42.468 14:53:15 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.468 14:53:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.468 14:53:15 -- common/autotest_common.sh@10 -- # set +x 00:12:42.468 14:53:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.468 14:53:15 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.468 14:53:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.468 14:53:15 -- common/autotest_common.sh@10 -- # set +x 00:12:42.468 [2024-12-01 14:53:15.543776] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.468 14:53:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.468 14:53:15 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:42.468 14:53:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.468 14:53:15 -- common/autotest_common.sh@10 -- # set +x 00:12:42.468 14:53:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.468 14:53:15 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.468 14:53:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.468 14:53:15 -- common/autotest_common.sh@10 -- # set +x 00:12:42.468 14:53:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.468 14:53:15 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.727 14:53:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.727 14:53:15 -- common/autotest_common.sh@1187 -- # local i=0 00:12:42.727 14:53:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.727 14:53:15 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:42.727 14:53:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:44.632 14:53:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:44.632 14:53:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:44.632 14:53:17 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.891 14:53:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:44.891 14:53:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.892 14:53:17 -- common/autotest_common.sh@1197 -- # return 0 00:12:44.892 14:53:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.892 14:53:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.892 14:53:17 -- common/autotest_common.sh@1208 -- # local i=0 00:12:44.892 14:53:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:44.892 14:53:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.892 14:53:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.892 14:53:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:44.892 14:53:17 -- common/autotest_common.sh@1220 -- # return 0 00:12:44.892 14:53:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.892 14:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.892 14:53:17 -- common/autotest_common.sh@10 -- # set +x 00:12:44.892 14:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.892 14:53:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.892 14:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.892 14:53:17 -- common/autotest_common.sh@10 -- # set +x 00:12:44.892 14:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.892 14:53:17 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.892 14:53:17 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.892 14:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.892 14:53:17 -- common/autotest_common.sh@10 -- # set +x 00:12:44.892 14:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.892 14:53:17 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.892 14:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.892 14:53:17 -- common/autotest_common.sh@10 -- # set +x 00:12:44.892 [2024-12-01 14:53:17.852940] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.892 14:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.892 14:53:17 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.892 14:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.892 14:53:17 -- common/autotest_common.sh@10 -- # set +x 00:12:44.892 14:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.892 14:53:17 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.892 14:53:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.892 14:53:17 -- common/autotest_common.sh@10 -- # set +x 00:12:44.892 14:53:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.892 
14:53:17 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.151 14:53:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.151 14:53:18 -- common/autotest_common.sh@1187 -- # local i=0 00:12:45.151 14:53:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.151 14:53:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:45.151 14:53:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:47.058 14:53:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:47.058 14:53:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:47.058 14:53:20 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.058 14:53:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:47.058 14:53:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.058 14:53:20 -- common/autotest_common.sh@1197 -- # return 0 00:12:47.058 14:53:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.058 14:53:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.058 14:53:20 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.058 14:53:20 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.058 14:53:20 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.058 14:53:20 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.058 14:53:20 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.058 14:53:20 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.058 14:53:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.058 14:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.058 14:53:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.058 14:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.058 14:53:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.058 14:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.058 14:53:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.058 14:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.058 14:53:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.058 14:53:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.058 14:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.058 14:53:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.058 14:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.058 14:53:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.058 14:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.058 14:53:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.058 [2024-12-01 14:53:20.161896] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.058 14:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.058 14:53:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.058 
14:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.058 14:53:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 14:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.318 14:53:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.318 14:53:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.318 14:53:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 14:53:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.318 14:53:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.318 14:53:20 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.318 14:53:20 -- common/autotest_common.sh@1187 -- # local i=0 00:12:47.318 14:53:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.318 14:53:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:47.318 14:53:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:49.851 14:53:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:49.851 14:53:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:49.851 14:53:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.851 14:53:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:49.851 14:53:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.851 14:53:22 -- common/autotest_common.sh@1197 -- # return 0 00:12:49.851 14:53:22 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.851 14:53:22 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.851 14:53:22 -- common/autotest_common.sh@1208 -- # local i=0 00:12:49.851 14:53:22 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:49.851 14:53:22 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.851 14:53:22 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.851 14:53:22 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:49.851 14:53:22 -- common/autotest_common.sh@1220 -- # return 0 00:12:49.851 14:53:22 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.851 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.851 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.851 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.851 14:53:22 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.851 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.851 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.851 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@99 -- # seq 1 5 00:12:49.852 14:53:22 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.852 14:53:22 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 [2024-12-01 14:53:22.494998] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.852 14:53:22 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 [2024-12-01 14:53:22.543072] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.852 14:53:22 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 [2024-12-01 14:53:22.595128] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.852 14:53:22 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 [2024-12-01 14:53:22.643220] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 
14:53:22 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:49.852 14:53:22 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 [2024-12-01 14:53:22.691312] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:12:49.852 14:53:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.852 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:12:49.852 14:53:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.852 14:53:22 -- target/rpc.sh@110 -- # stats='{ 00:12:49.852 "poll_groups": [ 00:12:49.852 { 00:12:49.852 "admin_qpairs": 2, 00:12:49.852 "completed_nvme_io": 67, 00:12:49.852 "current_admin_qpairs": 0, 00:12:49.852 "current_io_qpairs": 0, 00:12:49.852 "io_qpairs": 16, 00:12:49.852 "name": "nvmf_tgt_poll_group_0", 00:12:49.853 "pending_bdev_io": 0, 00:12:49.853 "transports": [ 00:12:49.853 { 00:12:49.853 "trtype": "TCP" 00:12:49.853 } 00:12:49.853 ] 00:12:49.853 }, 00:12:49.853 { 00:12:49.853 "admin_qpairs": 3, 00:12:49.853 "completed_nvme_io": 67, 00:12:49.853 "current_admin_qpairs": 0, 00:12:49.853 "current_io_qpairs": 0, 00:12:49.853 "io_qpairs": 17, 00:12:49.853 "name": "nvmf_tgt_poll_group_1", 00:12:49.853 "pending_bdev_io": 0, 00:12:49.853 "transports": [ 00:12:49.853 { 00:12:49.853 "trtype": "TCP" 00:12:49.853 } 00:12:49.853 ] 00:12:49.853 }, 00:12:49.853 { 00:12:49.853 "admin_qpairs": 1, 00:12:49.853 "completed_nvme_io": 121, 00:12:49.853 "current_admin_qpairs": 0, 00:12:49.853 "current_io_qpairs": 0, 00:12:49.853 "io_qpairs": 19, 00:12:49.853 "name": "nvmf_tgt_poll_group_2", 00:12:49.853 "pending_bdev_io": 0, 00:12:49.853 "transports": [ 00:12:49.853 { 00:12:49.853 "trtype": "TCP" 00:12:49.853 } 00:12:49.853 ] 00:12:49.853 }, 00:12:49.853 { 00:12:49.853 "admin_qpairs": 1, 00:12:49.853 "completed_nvme_io": 165, 00:12:49.853 "current_admin_qpairs": 0, 00:12:49.853 "current_io_qpairs": 0, 00:12:49.853 "io_qpairs": 18, 00:12:49.853 "name": "nvmf_tgt_poll_group_3", 00:12:49.853 "pending_bdev_io": 0, 00:12:49.853 "transports": [ 00:12:49.853 { 00:12:49.853 "trtype": "TCP" 00:12:49.853 } 00:12:49.853 ] 00:12:49.853 } 00:12:49.853 ], 00:12:49.853 "tick_rate": 2200000000 00:12:49.853 }' 00:12:49.853 14:53:22 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:49.853 14:53:22 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:49.853 14:53:22 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:49.853 14:53:22 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.853 14:53:22 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:49.853 14:53:22 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:49.853 14:53:22 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:49.853 14:53:22 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:49.853 14:53:22 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.853 14:53:22 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:49.853 14:53:22 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:49.853 14:53:22 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:49.853 14:53:22 -- target/rpc.sh@123 -- # nvmftestfini 00:12:49.853 14:53:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:49.853 14:53:22 -- nvmf/common.sh@116 -- # sync 00:12:49.853 14:53:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:49.853 14:53:22 -- nvmf/common.sh@119 -- # set +e 00:12:49.853 14:53:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:49.853 14:53:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:49.853 rmmod nvme_tcp 00:12:49.853 rmmod nvme_fabrics 00:12:49.853 rmmod nvme_keyring 00:12:49.853 14:53:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:49.853 14:53:22 -- nvmf/common.sh@123 -- # set -e 00:12:49.853 14:53:22 -- nvmf/common.sh@124 
-- # return 0 00:12:49.853 14:53:22 -- nvmf/common.sh@477 -- # '[' -n 78056 ']' 00:12:49.853 14:53:22 -- nvmf/common.sh@478 -- # killprocess 78056 00:12:49.853 14:53:22 -- common/autotest_common.sh@936 -- # '[' -z 78056 ']' 00:12:49.853 14:53:22 -- common/autotest_common.sh@940 -- # kill -0 78056 00:12:49.853 14:53:22 -- common/autotest_common.sh@941 -- # uname 00:12:49.853 14:53:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:49.853 14:53:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78056 00:12:50.112 killing process with pid 78056 00:12:50.112 14:53:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:50.112 14:53:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:50.112 14:53:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78056' 00:12:50.112 14:53:22 -- common/autotest_common.sh@955 -- # kill 78056 00:12:50.112 14:53:22 -- common/autotest_common.sh@960 -- # wait 78056 00:12:50.112 14:53:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:50.112 14:53:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:50.112 14:53:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:50.112 14:53:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.112 14:53:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:50.112 14:53:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.112 14:53:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.112 14:53:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.371 14:53:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:50.371 00:12:50.371 real 0m19.035s 00:12:50.371 user 1m12.321s 00:12:50.371 sys 0m1.972s 00:12:50.371 14:53:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:50.371 ************************************ 00:12:50.371 END TEST nvmf_rpc 00:12:50.371 ************************************ 00:12:50.371 14:53:23 -- common/autotest_common.sh@10 -- # set +x 00:12:50.371 14:53:23 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:50.371 14:53:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:50.371 14:53:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.371 14:53:23 -- common/autotest_common.sh@10 -- # set +x 00:12:50.371 ************************************ 00:12:50.371 START TEST nvmf_invalid 00:12:50.371 ************************************ 00:12:50.371 14:53:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:50.371 * Looking for test storage... 
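A few entries back, target/rpc.sh checks the nvmf_get_stats output by summing counters across all poll groups with its jsum helper, a jq filter piped into awk. A condensed sketch of that check, assuming scripts/rpc.py can still reach the target's RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  stats=$("$rpc" nvmf_get_stats)

  # jsum: apply a jq filter to the stats blob and add up the resulting numbers
  jsum() { jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'; }

  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1 = 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16+17+19+18 = 70 in this run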
00:12:50.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:50.371 14:53:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:50.371 14:53:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:50.371 14:53:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:50.371 14:53:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:50.371 14:53:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:50.371 14:53:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:50.371 14:53:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:50.371 14:53:23 -- scripts/common.sh@335 -- # IFS=.-: 00:12:50.371 14:53:23 -- scripts/common.sh@335 -- # read -ra ver1 00:12:50.371 14:53:23 -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.371 14:53:23 -- scripts/common.sh@336 -- # read -ra ver2 00:12:50.371 14:53:23 -- scripts/common.sh@337 -- # local 'op=<' 00:12:50.371 14:53:23 -- scripts/common.sh@339 -- # ver1_l=2 00:12:50.371 14:53:23 -- scripts/common.sh@340 -- # ver2_l=1 00:12:50.371 14:53:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:50.371 14:53:23 -- scripts/common.sh@343 -- # case "$op" in 00:12:50.371 14:53:23 -- scripts/common.sh@344 -- # : 1 00:12:50.371 14:53:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:50.371 14:53:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.371 14:53:23 -- scripts/common.sh@364 -- # decimal 1 00:12:50.371 14:53:23 -- scripts/common.sh@352 -- # local d=1 00:12:50.371 14:53:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.371 14:53:23 -- scripts/common.sh@354 -- # echo 1 00:12:50.371 14:53:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:50.371 14:53:23 -- scripts/common.sh@365 -- # decimal 2 00:12:50.371 14:53:23 -- scripts/common.sh@352 -- # local d=2 00:12:50.371 14:53:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.371 14:53:23 -- scripts/common.sh@354 -- # echo 2 00:12:50.630 14:53:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:50.630 14:53:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:50.631 14:53:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:50.631 14:53:23 -- scripts/common.sh@367 -- # return 0 00:12:50.631 14:53:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.631 14:53:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.631 --rc genhtml_branch_coverage=1 00:12:50.631 --rc genhtml_function_coverage=1 00:12:50.631 --rc genhtml_legend=1 00:12:50.631 --rc geninfo_all_blocks=1 00:12:50.631 --rc geninfo_unexecuted_blocks=1 00:12:50.631 00:12:50.631 ' 00:12:50.631 14:53:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.631 --rc genhtml_branch_coverage=1 00:12:50.631 --rc genhtml_function_coverage=1 00:12:50.631 --rc genhtml_legend=1 00:12:50.631 --rc geninfo_all_blocks=1 00:12:50.631 --rc geninfo_unexecuted_blocks=1 00:12:50.631 00:12:50.631 ' 00:12:50.631 14:53:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.631 --rc genhtml_branch_coverage=1 00:12:50.631 --rc genhtml_function_coverage=1 00:12:50.631 --rc genhtml_legend=1 00:12:50.631 --rc geninfo_all_blocks=1 00:12:50.631 --rc geninfo_unexecuted_blocks=1 00:12:50.631 00:12:50.631 ' 00:12:50.631 
14:53:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.631 --rc genhtml_branch_coverage=1 00:12:50.631 --rc genhtml_function_coverage=1 00:12:50.631 --rc genhtml_legend=1 00:12:50.631 --rc geninfo_all_blocks=1 00:12:50.631 --rc geninfo_unexecuted_blocks=1 00:12:50.631 00:12:50.631 ' 00:12:50.631 14:53:23 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:50.631 14:53:23 -- nvmf/common.sh@7 -- # uname -s 00:12:50.631 14:53:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.631 14:53:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.631 14:53:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.631 14:53:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.631 14:53:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.631 14:53:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.631 14:53:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.631 14:53:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.631 14:53:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.631 14:53:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.631 14:53:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:12:50.631 14:53:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:12:50.631 14:53:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.631 14:53:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.631 14:53:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:50.631 14:53:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.631 14:53:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.631 14:53:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.631 14:53:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.631 14:53:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.631 14:53:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.631 14:53:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.631 14:53:23 -- paths/export.sh@5 -- # export PATH 00:12:50.631 14:53:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.631 14:53:23 -- nvmf/common.sh@46 -- # : 0 00:12:50.631 14:53:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:50.631 14:53:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:50.631 14:53:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:50.631 14:53:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.631 14:53:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.631 14:53:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:50.631 14:53:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:50.631 14:53:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:50.631 14:53:23 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:50.631 14:53:23 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:50.631 14:53:23 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:50.631 14:53:23 -- target/invalid.sh@14 -- # target=foobar 00:12:50.631 14:53:23 -- target/invalid.sh@16 -- # RANDOM=0 00:12:50.631 14:53:23 -- target/invalid.sh@34 -- # nvmftestinit 00:12:50.631 14:53:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:50.631 14:53:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.631 14:53:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:50.631 14:53:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:50.631 14:53:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:50.631 14:53:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.631 14:53:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.631 14:53:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.631 14:53:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:50.631 14:53:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:50.631 14:53:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:50.631 14:53:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:50.631 14:53:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:50.631 14:53:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:50.631 14:53:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.631 14:53:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.631 14:53:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
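nvmf/common.sh derives the host identity once with nvme gen-hostnqn and reuses it for every nvme connect in the run. A small sketch of that idea; pulling the host ID out of the NQN with a parameter expansion is an assumption here, only the resulting values are visible in the trace:

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:2d843004-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed derivation of the bare uuid
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420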
00:12:50.631 14:53:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:50.631 14:53:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:50.631 14:53:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:50.631 14:53:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:50.631 14:53:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.631 14:53:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:50.631 14:53:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:50.631 14:53:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:50.631 14:53:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:50.631 14:53:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:50.631 14:53:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:50.631 Cannot find device "nvmf_tgt_br" 00:12:50.631 14:53:23 -- nvmf/common.sh@154 -- # true 00:12:50.631 14:53:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:50.631 Cannot find device "nvmf_tgt_br2" 00:12:50.631 14:53:23 -- nvmf/common.sh@155 -- # true 00:12:50.631 14:53:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:50.631 14:53:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:50.631 Cannot find device "nvmf_tgt_br" 00:12:50.631 14:53:23 -- nvmf/common.sh@157 -- # true 00:12:50.631 14:53:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:50.631 Cannot find device "nvmf_tgt_br2" 00:12:50.631 14:53:23 -- nvmf/common.sh@158 -- # true 00:12:50.631 14:53:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:50.631 14:53:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:50.631 14:53:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.631 14:53:23 -- nvmf/common.sh@161 -- # true 00:12:50.631 14:53:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.631 14:53:23 -- nvmf/common.sh@162 -- # true 00:12:50.631 14:53:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:50.631 14:53:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:50.631 14:53:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:50.631 14:53:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:50.631 14:53:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:50.631 14:53:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:50.891 14:53:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:50.891 14:53:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:50.891 14:53:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:50.891 14:53:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:50.891 14:53:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:50.891 14:53:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:50.891 14:53:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:12:50.891 14:53:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:50.891 14:53:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.891 14:53:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.891 14:53:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:50.891 14:53:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:50.891 14:53:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:50.891 14:53:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:50.891 14:53:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:50.891 14:53:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:50.891 14:53:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:50.891 14:53:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:50.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:12:50.891 00:12:50.891 --- 10.0.0.2 ping statistics --- 00:12:50.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.891 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:12:50.891 14:53:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:50.891 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:50.891 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:12:50.891 00:12:50.891 --- 10.0.0.3 ping statistics --- 00:12:50.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.891 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:50.891 14:53:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:50.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:50.891 00:12:50.891 --- 10.0.0.1 ping statistics --- 00:12:50.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.891 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:50.891 14:53:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.891 14:53:23 -- nvmf/common.sh@421 -- # return 0 00:12:50.891 14:53:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:50.891 14:53:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.891 14:53:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:50.891 14:53:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:50.891 14:53:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.891 14:53:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:50.891 14:53:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:50.891 14:53:23 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:50.891 14:53:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:50.891 14:53:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:50.891 14:53:23 -- common/autotest_common.sh@10 -- # set +x 00:12:50.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
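The nvmf_veth_init sequence above builds the whole test network: a veth pair per target interface with the target ends moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side peers, an iptables rule admitting TCP/4420, and a ping of every address as a smoke test. A condensed sketch of the same sequence, with names and addresses copied from the trace (run as root):

  ns=nvmf_tgt_ns_spdk
  ip netns add "$ns"

  # veth pairs: *_if carries an IP address, *_br is its bridge-side peer
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns "$ns"
  ip link set nvmf_tgt_if2 netns "$ns"

  # addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec "$ns" ip link set nvmf_tgt_if up
  ip netns exec "$ns" ip link set nvmf_tgt_if2 up
  ip netns exec "$ns" ip link set lo up

  # bridge the host-side peers together and open the NVMe/TCP port
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec "$ns" ping -c 1 10.0.0.1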
00:12:50.891 14:53:23 -- nvmf/common.sh@469 -- # nvmfpid=78570 00:12:50.891 14:53:23 -- nvmf/common.sh@470 -- # waitforlisten 78570 00:12:50.891 14:53:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.891 14:53:23 -- common/autotest_common.sh@829 -- # '[' -z 78570 ']' 00:12:50.891 14:53:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.891 14:53:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:50.891 14:53:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.891 14:53:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:50.891 14:53:23 -- common/autotest_common.sh@10 -- # set +x 00:12:50.891 [2024-12-01 14:53:23.952439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:50.891 [2024-12-01 14:53:23.952529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.150 [2024-12-01 14:53:24.085762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.150 [2024-12-01 14:53:24.136458] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:51.150 [2024-12-01 14:53:24.136598] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.150 [2024-12-01 14:53:24.136610] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.150 [2024-12-01 14:53:24.136617] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
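With the network up, the harness launches the target inside the namespace and waits for its RPC socket before sending any of the RPCs that follow. A hedged sketch; the retry loop below is only a stand-in for the harness's waitforlisten helper:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # keep retrying a harmless RPC until /var/tmp/spdk.sock answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target already died
      sleep 0.5
  done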
00:12:51.150 [2024-12-01 14:53:24.137225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.150 [2024-12-01 14:53:24.137343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.150 [2024-12-01 14:53:24.137441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.150 [2024-12-01 14:53:24.137446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.087 14:53:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.087 14:53:24 -- common/autotest_common.sh@862 -- # return 0 00:12:52.087 14:53:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:52.087 14:53:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.087 14:53:24 -- common/autotest_common.sh@10 -- # set +x 00:12:52.087 14:53:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.087 14:53:24 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:52.087 14:53:24 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21215 00:12:52.087 [2024-12-01 14:53:25.196822] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:52.346 14:53:25 -- target/invalid.sh@40 -- # out='2024/12/01 14:53:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21215 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:52.346 request: 00:12:52.346 { 00:12:52.346 "method": "nvmf_create_subsystem", 00:12:52.346 "params": { 00:12:52.346 "nqn": "nqn.2016-06.io.spdk:cnode21215", 00:12:52.346 "tgt_name": "foobar" 00:12:52.346 } 00:12:52.346 } 00:12:52.346 Got JSON-RPC error response 00:12:52.346 GoRPCClient: error on JSON-RPC call' 00:12:52.346 14:53:25 -- target/invalid.sh@41 -- # [[ 2024/12/01 14:53:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21215 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:52.346 request: 00:12:52.346 { 00:12:52.346 "method": "nvmf_create_subsystem", 00:12:52.346 "params": { 00:12:52.346 "nqn": "nqn.2016-06.io.spdk:cnode21215", 00:12:52.346 "tgt_name": "foobar" 00:12:52.346 } 00:12:52.346 } 00:12:52.346 Got JSON-RPC error response 00:12:52.346 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:52.346 14:53:25 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:52.346 14:53:25 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6665 00:12:52.346 [2024-12-01 14:53:25.421208] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6665: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:52.346 14:53:25 -- target/invalid.sh@45 -- # out='2024/12/01 14:53:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6665 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:52.346 request: 00:12:52.346 { 00:12:52.346 "method": "nvmf_create_subsystem", 00:12:52.346 "params": { 00:12:52.346 "nqn": "nqn.2016-06.io.spdk:cnode6665", 00:12:52.346 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:12:52.346 } 00:12:52.346 } 00:12:52.346 Got JSON-RPC error response 00:12:52.346 GoRPCClient: error on JSON-RPC call' 00:12:52.346 14:53:25 -- target/invalid.sh@46 -- # [[ 2024/12/01 14:53:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6665 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:52.346 request: 00:12:52.346 { 00:12:52.346 "method": "nvmf_create_subsystem", 00:12:52.346 "params": { 00:12:52.346 "nqn": "nqn.2016-06.io.spdk:cnode6665", 00:12:52.346 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:52.346 } 00:12:52.346 } 00:12:52.346 Got JSON-RPC error response 00:12:52.346 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:52.346 14:53:25 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:52.346 14:53:25 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23230 00:12:52.606 [2024-12-01 14:53:25.645513] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23230: invalid model number 'SPDK_Controller' 00:12:52.606 14:53:25 -- target/invalid.sh@50 -- # out='2024/12/01 14:53:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode23230], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:52.606 request: 00:12:52.606 { 00:12:52.606 "method": "nvmf_create_subsystem", 00:12:52.606 "params": { 00:12:52.606 "nqn": "nqn.2016-06.io.spdk:cnode23230", 00:12:52.606 "model_number": "SPDK_Controller\u001f" 00:12:52.606 } 00:12:52.606 } 00:12:52.606 Got JSON-RPC error response 00:12:52.606 GoRPCClient: error on JSON-RPC call' 00:12:52.606 14:53:25 -- target/invalid.sh@51 -- # [[ 2024/12/01 14:53:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode23230], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:52.606 request: 00:12:52.606 { 00:12:52.606 "method": "nvmf_create_subsystem", 00:12:52.606 "params": { 00:12:52.606 "nqn": "nqn.2016-06.io.spdk:cnode23230", 00:12:52.606 "model_number": "SPDK_Controller\u001f" 00:12:52.606 } 00:12:52.606 } 00:12:52.606 Got JSON-RPC error response 00:12:52.606 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:52.606 14:53:25 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:52.606 14:53:25 -- target/invalid.sh@19 -- # local length=21 ll 00:12:52.606 14:53:25 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:52.606 14:53:25 -- target/invalid.sh@21 -- # local chars 00:12:52.606 14:53:25 -- target/invalid.sh@22 -- # local string 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # printf %x 70 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # string+=F 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # printf %x 49 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # string+=1 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # printf %x 83 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # string+=S 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # printf %x 32 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # string+=' ' 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # printf %x 67 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # string+=C 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # printf %x 125 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # string+='}' 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # printf %x 33 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # string+='!' 
00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # printf %x 59 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # string+=';' 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.606 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.606 14:53:25 -- target/invalid.sh@25 -- # printf %x 36 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+='$' 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 60 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+='<' 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 68 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+=D 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 121 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+=y 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 113 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+=q 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 57 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+=9 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 109 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+=m 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 76 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+=L 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 71 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+=G 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 103 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+=g 
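Each iteration traced above repeats the same three steps: pick one decimal code from the chars array, print it as hex with printf %x, turn it back into a character with echo -e, and append it to string. A condensed sketch of a generator built the same way (the function name and code range follow the trace; the leading-dash handling at invalid.sh@28 is only hinted at in the log, so the remedy below is an assumption):

# Sketch: build an N-character string from random ASCII codes 32..127, as in the trace.
gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))                                  # same decimal code range as above
        for (( ll = 0; ll < length; ll++ )); do
                local code=${chars[RANDOM % ${#chars[@]}]}
                string+=$(echo -e "\x$(printf %x "$code")")          # decimal code -> character
        done
        [[ ${string:0:1} != - ]] || string="_${string:1}"            # assumed guard against an option-like leading '-'
        echo "$string"
}

The suite then requests a 21-character string, presumably to serve as another deliberately malformed parameter in the checks that follow.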
00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 123 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+='{' 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 119 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+=w 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # printf %x 109 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:52.866 14:53:25 -- target/invalid.sh@25 -- # string+=m 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.866 14:53:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.866 14:53:25 -- target/invalid.sh@28 -- # [[ F == \- ]] 00:12:52.866 14:53:25 -- target/invalid.sh@31 -- # echo 'F1S C}!;$ /dev/null' 00:12:56.009 14:53:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.009 14:53:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:56.009 00:12:56.009 real 0m5.784s 00:12:56.009 user 0m22.842s 00:12:56.009 sys 0m1.269s 00:12:56.009 14:53:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:56.009 14:53:29 -- common/autotest_common.sh@10 -- # set +x 00:12:56.009 ************************************ 00:12:56.009 END TEST nvmf_invalid 00:12:56.009 ************************************ 00:12:56.267 14:53:29 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:56.267 14:53:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:56.267 14:53:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:56.267 14:53:29 -- common/autotest_common.sh@10 -- # set +x 00:12:56.267 ************************************ 00:12:56.267 START TEST nvmf_abort 00:12:56.267 ************************************ 00:12:56.267 14:53:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:56.267 * Looking for test storage... 
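The block above closes one suite and opens the next: nvmf_invalid ends with its timing summary and END TEST banner, and run_test immediately launches abort.sh with the same --transport argument. A simplified illustration of what such a wrapper does (the real run_test in autotest_common.sh also checks its argument count and manages xtrace, both omitted here):

# Illustrative only: banner plus timing around each test script, as seen in the log.
run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                       # e.g. .../test/nvmf/target/abort.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
}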
00:12:56.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:56.267 14:53:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:56.268 14:53:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:56.268 14:53:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:56.268 14:53:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:56.268 14:53:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:56.268 14:53:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:56.268 14:53:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:56.268 14:53:29 -- scripts/common.sh@335 -- # IFS=.-: 00:12:56.268 14:53:29 -- scripts/common.sh@335 -- # read -ra ver1 00:12:56.268 14:53:29 -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.268 14:53:29 -- scripts/common.sh@336 -- # read -ra ver2 00:12:56.268 14:53:29 -- scripts/common.sh@337 -- # local 'op=<' 00:12:56.268 14:53:29 -- scripts/common.sh@339 -- # ver1_l=2 00:12:56.268 14:53:29 -- scripts/common.sh@340 -- # ver2_l=1 00:12:56.268 14:53:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:56.268 14:53:29 -- scripts/common.sh@343 -- # case "$op" in 00:12:56.268 14:53:29 -- scripts/common.sh@344 -- # : 1 00:12:56.268 14:53:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:56.268 14:53:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:56.268 14:53:29 -- scripts/common.sh@364 -- # decimal 1 00:12:56.268 14:53:29 -- scripts/common.sh@352 -- # local d=1 00:12:56.268 14:53:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.268 14:53:29 -- scripts/common.sh@354 -- # echo 1 00:12:56.268 14:53:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:56.268 14:53:29 -- scripts/common.sh@365 -- # decimal 2 00:12:56.268 14:53:29 -- scripts/common.sh@352 -- # local d=2 00:12:56.268 14:53:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.268 14:53:29 -- scripts/common.sh@354 -- # echo 2 00:12:56.268 14:53:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:56.268 14:53:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:56.268 14:53:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:56.268 14:53:29 -- scripts/common.sh@367 -- # return 0 00:12:56.268 14:53:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.268 14:53:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:56.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.268 --rc genhtml_branch_coverage=1 00:12:56.268 --rc genhtml_function_coverage=1 00:12:56.268 --rc genhtml_legend=1 00:12:56.268 --rc geninfo_all_blocks=1 00:12:56.268 --rc geninfo_unexecuted_blocks=1 00:12:56.268 00:12:56.268 ' 00:12:56.268 14:53:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:56.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.268 --rc genhtml_branch_coverage=1 00:12:56.268 --rc genhtml_function_coverage=1 00:12:56.268 --rc genhtml_legend=1 00:12:56.268 --rc geninfo_all_blocks=1 00:12:56.268 --rc geninfo_unexecuted_blocks=1 00:12:56.268 00:12:56.268 ' 00:12:56.268 14:53:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:56.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.268 --rc genhtml_branch_coverage=1 00:12:56.268 --rc genhtml_function_coverage=1 00:12:56.268 --rc genhtml_legend=1 00:12:56.268 --rc geninfo_all_blocks=1 00:12:56.268 --rc geninfo_unexecuted_blocks=1 00:12:56.268 00:12:56.268 ' 00:12:56.268 
14:53:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:56.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.268 --rc genhtml_branch_coverage=1 00:12:56.268 --rc genhtml_function_coverage=1 00:12:56.268 --rc genhtml_legend=1 00:12:56.268 --rc geninfo_all_blocks=1 00:12:56.268 --rc geninfo_unexecuted_blocks=1 00:12:56.268 00:12:56.268 ' 00:12:56.268 14:53:29 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:56.268 14:53:29 -- nvmf/common.sh@7 -- # uname -s 00:12:56.268 14:53:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.268 14:53:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.268 14:53:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.268 14:53:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.268 14:53:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.268 14:53:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.268 14:53:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.268 14:53:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.268 14:53:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.268 14:53:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.268 14:53:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:12:56.268 14:53:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:12:56.268 14:53:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.268 14:53:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.268 14:53:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:56.268 14:53:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:56.268 14:53:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.268 14:53:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.268 14:53:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.268 14:53:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.268 14:53:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.268 14:53:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.268 14:53:29 -- paths/export.sh@5 -- # export PATH 00:12:56.268 14:53:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.268 14:53:29 -- nvmf/common.sh@46 -- # : 0 00:12:56.268 14:53:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:56.268 14:53:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:56.268 14:53:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:56.268 14:53:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.268 14:53:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.268 14:53:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:56.268 14:53:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:56.268 14:53:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:56.268 14:53:29 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:56.268 14:53:29 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:56.268 14:53:29 -- target/abort.sh@14 -- # nvmftestinit 00:12:56.268 14:53:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:56.268 14:53:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.268 14:53:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:56.268 14:53:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:56.268 14:53:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:56.268 14:53:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.268 14:53:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.268 14:53:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.268 14:53:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:56.268 14:53:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:56.268 14:53:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:56.268 14:53:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:56.268 14:53:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:56.268 14:53:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:56.268 14:53:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.268 14:53:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.268 14:53:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:56.268 14:53:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:56.268 14:53:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:56.268 14:53:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:56.268 14:53:29 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:56.268 14:53:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.268 14:53:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:56.268 14:53:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:56.268 14:53:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:56.268 14:53:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:56.268 14:53:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:56.268 14:53:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:56.268 Cannot find device "nvmf_tgt_br" 00:12:56.268 14:53:29 -- nvmf/common.sh@154 -- # true 00:12:56.268 14:53:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:56.526 Cannot find device "nvmf_tgt_br2" 00:12:56.526 14:53:29 -- nvmf/common.sh@155 -- # true 00:12:56.526 14:53:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:56.526 14:53:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:56.526 Cannot find device "nvmf_tgt_br" 00:12:56.526 14:53:29 -- nvmf/common.sh@157 -- # true 00:12:56.526 14:53:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:56.526 Cannot find device "nvmf_tgt_br2" 00:12:56.526 14:53:29 -- nvmf/common.sh@158 -- # true 00:12:56.526 14:53:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:56.526 14:53:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:56.526 14:53:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.526 14:53:29 -- nvmf/common.sh@161 -- # true 00:12:56.526 14:53:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.526 14:53:29 -- nvmf/common.sh@162 -- # true 00:12:56.526 14:53:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.526 14:53:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.526 14:53:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.526 14:53:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.526 14:53:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.526 14:53:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.526 14:53:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.526 14:53:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:56.526 14:53:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:56.526 14:53:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:56.526 14:53:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:56.526 14:53:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:56.526 14:53:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:56.526 14:53:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:56.526 14:53:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.526 14:53:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:56.526 14:53:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:56.526 14:53:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:56.526 14:53:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.526 14:53:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.526 14:53:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.784 14:53:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.785 14:53:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.785 14:53:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:56.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:12:56.785 00:12:56.785 --- 10.0.0.2 ping statistics --- 00:12:56.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.785 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:56.785 14:53:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:56.785 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.785 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:12:56.785 00:12:56.785 --- 10.0.0.3 ping statistics --- 00:12:56.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.785 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:56.785 14:53:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:56.785 00:12:56.785 --- 10.0.0.1 ping statistics --- 00:12:56.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.785 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:56.785 14:53:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.785 14:53:29 -- nvmf/common.sh@421 -- # return 0 00:12:56.785 14:53:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:56.785 14:53:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.785 14:53:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:56.785 14:53:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:56.785 14:53:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.785 14:53:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:56.785 14:53:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:56.785 14:53:29 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:56.785 14:53:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:56.785 14:53:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:56.785 14:53:29 -- common/autotest_common.sh@10 -- # set +x 00:12:56.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.785 14:53:29 -- nvmf/common.sh@469 -- # nvmfpid=79084 00:12:56.785 14:53:29 -- nvmf/common.sh@470 -- # waitforlisten 79084 00:12:56.785 14:53:29 -- common/autotest_common.sh@829 -- # '[' -z 79084 ']' 00:12:56.785 14:53:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.785 14:53:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.785 14:53:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:56.785 14:53:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.785 14:53:29 -- common/autotest_common.sh@10 -- # set +x 00:12:56.785 14:53:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:56.785 [2024-12-01 14:53:29.733672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:56.785 [2024-12-01 14:53:29.733743] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.785 [2024-12-01 14:53:29.866425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:57.043 [2024-12-01 14:53:29.954926] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:57.043 [2024-12-01 14:53:29.955071] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.043 [2024-12-01 14:53:29.955084] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.043 [2024-12-01 14:53:29.955092] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.043 [2024-12-01 14:53:29.955217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.043 [2024-12-01 14:53:29.955950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.043 [2024-12-01 14:53:29.955956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.607 14:53:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.607 14:53:30 -- common/autotest_common.sh@862 -- # return 0 00:12:57.607 14:53:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:57.607 14:53:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:57.607 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.607 14:53:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.607 14:53:30 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:57.607 14:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.607 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.607 [2024-12-01 14:53:30.662337] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.607 14:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.607 14:53:30 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:57.607 14:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.607 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.607 Malloc0 00:12:57.607 14:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.607 14:53:30 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:57.607 14:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.607 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.866 Delay0 00:12:57.866 14:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.866 14:53:30 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:57.866 14:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.866 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.866 14:53:30 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.866 14:53:30 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:57.866 14:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.866 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.866 14:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.866 14:53:30 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:57.866 14:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.866 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.866 [2024-12-01 14:53:30.742362] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.866 14:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.866 14:53:30 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:57.866 14:53:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.866 14:53:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.866 14:53:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.866 14:53:30 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:57.866 [2024-12-01 14:53:30.922375] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:00.400 Initializing NVMe Controllers 00:13:00.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:00.400 controller IO queue size 128 less than required 00:13:00.400 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:00.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:00.400 Initialization complete. Launching workers. 
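Collapsed into one place, the rpc_cmd trace above provisions the abort target with a short RPC sequence, after which the abort example is pointed at the listener. A sketch of the same calls issued directly through rpc.py (sizes, names and addresses exactly as in the trace; the test issues them through its rpc_cmd helper, which wraps the same script):

# Same RPC sequence as traced above, written out via rpc.py.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0                   # 64 MiB bdev, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# The example then queues reads at depth 128 for 1 second and submits aborts against them:
/home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 bdev layered on Malloc0 is presumably what keeps commands in flight long enough for the abort requests to catch them, which matches the large "abort submitted" count reported below.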
00:13:00.400 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 41129 00:13:00.400 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41191, failed to submit 62 00:13:00.400 success 41129, unsuccess 62, failed 0 00:13:00.400 14:53:32 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:00.400 14:53:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.400 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:13:00.400 14:53:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.400 14:53:32 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:00.400 14:53:32 -- target/abort.sh@38 -- # nvmftestfini 00:13:00.400 14:53:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:00.400 14:53:32 -- nvmf/common.sh@116 -- # sync 00:13:00.400 14:53:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:00.400 14:53:33 -- nvmf/common.sh@119 -- # set +e 00:13:00.400 14:53:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:00.400 14:53:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:00.400 rmmod nvme_tcp 00:13:00.400 rmmod nvme_fabrics 00:13:00.400 rmmod nvme_keyring 00:13:00.400 14:53:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:00.400 14:53:33 -- nvmf/common.sh@123 -- # set -e 00:13:00.400 14:53:33 -- nvmf/common.sh@124 -- # return 0 00:13:00.400 14:53:33 -- nvmf/common.sh@477 -- # '[' -n 79084 ']' 00:13:00.400 14:53:33 -- nvmf/common.sh@478 -- # killprocess 79084 00:13:00.400 14:53:33 -- common/autotest_common.sh@936 -- # '[' -z 79084 ']' 00:13:00.400 14:53:33 -- common/autotest_common.sh@940 -- # kill -0 79084 00:13:00.400 14:53:33 -- common/autotest_common.sh@941 -- # uname 00:13:00.400 14:53:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:00.400 14:53:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79084 00:13:00.400 14:53:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:00.400 14:53:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:00.400 killing process with pid 79084 00:13:00.400 14:53:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79084' 00:13:00.400 14:53:33 -- common/autotest_common.sh@955 -- # kill 79084 00:13:00.400 14:53:33 -- common/autotest_common.sh@960 -- # wait 79084 00:13:00.400 14:53:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:00.400 14:53:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:00.400 14:53:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:00.400 14:53:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.400 14:53:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:00.400 14:53:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.400 14:53:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.400 14:53:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.400 14:53:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:00.400 00:13:00.400 real 0m4.315s 00:13:00.400 user 0m12.031s 00:13:00.400 sys 0m1.190s 00:13:00.400 14:53:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:00.400 14:53:33 -- common/autotest_common.sh@10 -- # set +x 00:13:00.400 ************************************ 00:13:00.400 END TEST nvmf_abort 00:13:00.400 ************************************ 00:13:00.400 14:53:33 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:00.400 14:53:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:00.400 14:53:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.400 14:53:33 -- common/autotest_common.sh@10 -- # set +x 00:13:00.400 ************************************ 00:13:00.400 START TEST nvmf_ns_hotplug_stress 00:13:00.400 ************************************ 00:13:00.400 14:53:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:00.659 * Looking for test storage... 00:13:00.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:00.660 14:53:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:00.660 14:53:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:00.660 14:53:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:00.660 14:53:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:00.660 14:53:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:00.660 14:53:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:00.660 14:53:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:00.660 14:53:33 -- scripts/common.sh@335 -- # IFS=.-: 00:13:00.660 14:53:33 -- scripts/common.sh@335 -- # read -ra ver1 00:13:00.660 14:53:33 -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.660 14:53:33 -- scripts/common.sh@336 -- # read -ra ver2 00:13:00.660 14:53:33 -- scripts/common.sh@337 -- # local 'op=<' 00:13:00.660 14:53:33 -- scripts/common.sh@339 -- # ver1_l=2 00:13:00.660 14:53:33 -- scripts/common.sh@340 -- # ver2_l=1 00:13:00.660 14:53:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:00.660 14:53:33 -- scripts/common.sh@343 -- # case "$op" in 00:13:00.660 14:53:33 -- scripts/common.sh@344 -- # : 1 00:13:00.660 14:53:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:00.660 14:53:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.660 14:53:33 -- scripts/common.sh@364 -- # decimal 1 00:13:00.660 14:53:33 -- scripts/common.sh@352 -- # local d=1 00:13:00.660 14:53:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.660 14:53:33 -- scripts/common.sh@354 -- # echo 1 00:13:00.660 14:53:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:00.660 14:53:33 -- scripts/common.sh@365 -- # decimal 2 00:13:00.660 14:53:33 -- scripts/common.sh@352 -- # local d=2 00:13:00.660 14:53:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.660 14:53:33 -- scripts/common.sh@354 -- # echo 2 00:13:00.660 14:53:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:00.660 14:53:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:00.660 14:53:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:00.660 14:53:33 -- scripts/common.sh@367 -- # return 0 00:13:00.660 14:53:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.660 14:53:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:00.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.660 --rc genhtml_branch_coverage=1 00:13:00.660 --rc genhtml_function_coverage=1 00:13:00.660 --rc genhtml_legend=1 00:13:00.660 --rc geninfo_all_blocks=1 00:13:00.660 --rc geninfo_unexecuted_blocks=1 00:13:00.660 00:13:00.660 ' 00:13:00.660 14:53:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:00.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.660 --rc genhtml_branch_coverage=1 00:13:00.660 --rc genhtml_function_coverage=1 00:13:00.660 --rc genhtml_legend=1 00:13:00.660 --rc geninfo_all_blocks=1 00:13:00.660 --rc geninfo_unexecuted_blocks=1 00:13:00.660 00:13:00.660 ' 00:13:00.660 14:53:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:00.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.660 --rc genhtml_branch_coverage=1 00:13:00.660 --rc genhtml_function_coverage=1 00:13:00.660 --rc genhtml_legend=1 00:13:00.660 --rc geninfo_all_blocks=1 00:13:00.660 --rc geninfo_unexecuted_blocks=1 00:13:00.660 00:13:00.660 ' 00:13:00.660 14:53:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:00.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.660 --rc genhtml_branch_coverage=1 00:13:00.660 --rc genhtml_function_coverage=1 00:13:00.660 --rc genhtml_legend=1 00:13:00.660 --rc geninfo_all_blocks=1 00:13:00.660 --rc geninfo_unexecuted_blocks=1 00:13:00.660 00:13:00.660 ' 00:13:00.660 14:53:33 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:00.660 14:53:33 -- nvmf/common.sh@7 -- # uname -s 00:13:00.660 14:53:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.660 14:53:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.660 14:53:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.660 14:53:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.660 14:53:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.660 14:53:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.660 14:53:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.660 14:53:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.660 14:53:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.660 14:53:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.660 14:53:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 
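The scripts/common.sh trace above is the coverage gate that both suites run: it takes the installed lcov version, compares it against 1.15 field by field, and only then exports the branch/function coverage flags seen a few lines later. A compact sketch of such a field-wise comparison (same helper names as the trace; the real cmp_versions also handles other operators and non-numeric fields via decimal(), which is omitted here):

# Sketch of a field-wise version comparison in the spirit of lt()/cmp_versions().
cmp_versions() {
        local IFS=.- op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<<"$1"
        read -ra ver2 <<<"$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
                local a=${ver1[v]:-0} b=${ver2[v]:-0}
                (( a > b )) && { [[ $op == '>' ]]; return; }
                (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]                                 # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }                     # the trace calls: lt 1.15 2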
00:13:00.660 14:53:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:13:00.660 14:53:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.660 14:53:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.660 14:53:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:00.660 14:53:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:00.660 14:53:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.660 14:53:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.660 14:53:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.660 14:53:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.660 14:53:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.660 14:53:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.660 14:53:33 -- paths/export.sh@5 -- # export PATH 00:13:00.660 14:53:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.660 14:53:33 -- nvmf/common.sh@46 -- # : 0 00:13:00.660 14:53:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:00.660 14:53:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:00.660 14:53:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:00.660 14:53:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.660 14:53:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.660 14:53:33 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:00.660 14:53:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:00.660 14:53:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:00.660 14:53:33 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:00.660 14:53:33 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:00.660 14:53:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:00.660 14:53:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.660 14:53:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:00.660 14:53:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:00.660 14:53:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:00.660 14:53:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.660 14:53:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.660 14:53:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.660 14:53:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:00.660 14:53:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:00.660 14:53:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:00.660 14:53:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:00.660 14:53:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:00.660 14:53:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:00.660 14:53:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.660 14:53:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.660 14:53:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:00.660 14:53:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:00.660 14:53:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:00.660 14:53:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:00.660 14:53:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:00.660 14:53:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.660 14:53:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:00.660 14:53:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:00.660 14:53:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:00.660 14:53:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:00.660 14:53:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:00.660 14:53:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:00.660 Cannot find device "nvmf_tgt_br" 00:13:00.660 14:53:33 -- nvmf/common.sh@154 -- # true 00:13:00.660 14:53:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:00.660 Cannot find device "nvmf_tgt_br2" 00:13:00.660 14:53:33 -- nvmf/common.sh@155 -- # true 00:13:00.660 14:53:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:00.660 14:53:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:00.660 Cannot find device "nvmf_tgt_br" 00:13:00.660 14:53:33 -- nvmf/common.sh@157 -- # true 00:13:00.661 14:53:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:00.661 Cannot find device "nvmf_tgt_br2" 00:13:00.661 14:53:33 -- nvmf/common.sh@158 -- # true 00:13:00.661 14:53:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:00.920 14:53:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:00.920 14:53:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:00.920 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:00.920 14:53:33 -- nvmf/common.sh@161 -- # true 00:13:00.920 14:53:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:00.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:00.920 14:53:33 -- nvmf/common.sh@162 -- # true 00:13:00.920 14:53:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:00.920 14:53:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:00.920 14:53:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:00.920 14:53:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:00.920 14:53:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:00.920 14:53:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:00.920 14:53:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:00.920 14:53:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:00.920 14:53:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:00.920 14:53:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:00.920 14:53:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:00.920 14:53:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:00.920 14:53:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:00.920 14:53:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:00.920 14:53:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:00.920 14:53:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:00.920 14:53:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:00.920 14:53:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:00.920 14:53:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:00.920 14:53:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:00.920 14:53:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:00.920 14:53:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:00.920 14:53:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:00.920 14:53:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:00.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:13:00.920 00:13:00.920 --- 10.0.0.2 ping statistics --- 00:13:00.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.920 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:00.920 14:53:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:00.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:00.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:00.920 00:13:00.920 --- 10.0.0.3 ping statistics --- 00:13:00.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.920 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:00.920 14:53:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:00.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:00.920 00:13:00.920 --- 10.0.0.1 ping statistics --- 00:13:00.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.920 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:00.920 14:53:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.920 14:53:33 -- nvmf/common.sh@421 -- # return 0 00:13:00.920 14:53:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:00.920 14:53:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.920 14:53:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:00.920 14:53:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:00.920 14:53:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.921 14:53:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:00.921 14:53:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:00.921 14:53:34 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:00.921 14:53:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:00.921 14:53:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:00.921 14:53:34 -- common/autotest_common.sh@10 -- # set +x 00:13:00.921 14:53:34 -- nvmf/common.sh@469 -- # nvmfpid=79355 00:13:00.921 14:53:34 -- nvmf/common.sh@470 -- # waitforlisten 79355 00:13:00.921 14:53:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:00.921 14:53:34 -- common/autotest_common.sh@829 -- # '[' -z 79355 ']' 00:13:00.921 14:53:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.921 14:53:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.921 14:53:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.921 14:53:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.921 14:53:34 -- common/autotest_common.sh@10 -- # set +x 00:13:01.180 [2024-12-01 14:53:34.063485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:01.180 [2024-12-01 14:53:34.063562] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.180 [2024-12-01 14:53:34.200010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:01.180 [2024-12-01 14:53:34.266885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:01.180 [2024-12-01 14:53:34.267067] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.180 [2024-12-01 14:53:34.267096] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.180 [2024-12-01 14:53:34.267109] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
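As in the abort run, nvmf_veth_init rebuilds the test topology from scratch: one network namespace for the target, three veth pairs, and a bridge joining the host-side peers, with the three pings above confirming reachability before nvmf_tgt is started inside the namespace. A condensed sketch of that topology (interface names and addresses as in the trace; the FORWARD accept rule and some per-link steps are abbreviated):

# Sketch of the veth/bridge layout built by nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator side, stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br           # target side, moved into the netns
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                           # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP to the target port
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                           # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                  # namespace -> host

The target itself is then launched with 'ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE', as shown above, which is why the listeners added later bind to 10.0.0.2 inside the namespace.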
00:13:01.180 [2024-12-01 14:53:34.267349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.180 [2024-12-01 14:53:34.267626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.180 [2024-12-01 14:53:34.267636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.116 14:53:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.116 14:53:35 -- common/autotest_common.sh@862 -- # return 0 00:13:02.116 14:53:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:02.116 14:53:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:02.116 14:53:35 -- common/autotest_common.sh@10 -- # set +x 00:13:02.116 14:53:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.116 14:53:35 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:02.116 14:53:35 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:02.374 [2024-12-01 14:53:35.385369] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.374 14:53:35 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:02.632 14:53:35 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.891 [2024-12-01 14:53:35.806102] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.891 14:53:35 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.149 14:53:36 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:03.149 Malloc0 00:13:03.408 14:53:36 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:03.408 Delay0 00:13:03.408 14:53:36 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.668 14:53:36 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:03.927 NULL1 00:13:03.927 14:53:36 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:04.185 14:53:37 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:04.185 14:53:37 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79486 00:13:04.185 14:53:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:04.185 14:53:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.562 Read completed with error (sct=0, sc=11) 00:13:05.562 14:53:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.562 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:05.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.562 14:53:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:05.562 14:53:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:05.821 true 00:13:05.821 14:53:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:05.821 14:53:38 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.775 14:53:39 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.775 14:53:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:06.775 14:53:39 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:07.053 true 00:13:07.053 14:53:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:07.053 14:53:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.322 14:53:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.580 14:53:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:07.580 14:53:40 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:07.839 true 00:13:07.839 14:53:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:07.839 14:53:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.776 14:53:41 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.776 14:53:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:08.776 14:53:41 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:09.035 true 00:13:09.035 14:53:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:09.035 14:53:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.293 14:53:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.552 14:53:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:09.552 14:53:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:09.810 true 00:13:09.810 14:53:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:09.810 14:53:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.746 14:53:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.005 14:53:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:11.005 14:53:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:11.005 true 00:13:11.005 14:53:44 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:11.005 14:53:44 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.264 14:53:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.524 14:53:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:11.524 14:53:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:11.783 true 00:13:11.783 14:53:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:11.783 14:53:44 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.717 14:53:45 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.976 14:53:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:12.976 14:53:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:12.976 true 00:13:12.976 14:53:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:12.976 14:53:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.234 14:53:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.493 14:53:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:13.493 14:53:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:13.752 true 00:13:13.752 14:53:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:13.752 14:53:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.689 14:53:47 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.949 14:53:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:14.949 14:53:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:14.949 true 00:13:14.949 14:53:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:14.949 14:53:48 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.516 14:53:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.516 14:53:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:15.516 14:53:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:15.775 true 00:13:15.775 14:53:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:15.775 14:53:48 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.712 14:53:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.970 14:53:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:16.970 14:53:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:17.229 true 00:13:17.229 14:53:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:17.229 14:53:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.489 14:53:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.747 14:53:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:17.747 14:53:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:17.747 true 00:13:17.747 14:53:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:17.747 14:53:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.683 14:53:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.942 14:53:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:18.942 14:53:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:19.201 true 00:13:19.201 14:53:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:19.201 14:53:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.459 14:53:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.718 14:53:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:19.718 14:53:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:19.976 true 00:13:19.976 14:53:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:19.976 14:53:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.912 14:53:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.912 14:53:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:20.912 14:53:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:21.171 true 00:13:21.171 14:53:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:21.171 14:53:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.430 14:53:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.687 14:53:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:21.687 14:53:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:21.945 true 00:13:21.945 14:53:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:21.945 14:53:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.879 14:53:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.879 14:53:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:22.879 14:53:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:23.138 true 00:13:23.138 14:53:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:23.138 14:53:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.396 14:53:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.654 14:53:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:23.654 14:53:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:23.912 true 00:13:23.912 14:53:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:23.912 14:53:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.851 14:53:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.110 14:53:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:25.110 14:53:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:25.110 true 00:13:25.110 14:53:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:25.110 14:53:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.369 14:53:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.628 14:53:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:25.628 14:53:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:25.892 true 00:13:25.892 14:53:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:25.892 14:53:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.830 14:53:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.830 14:53:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:26.830 14:53:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:27.088 true 00:13:27.347 14:54:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:27.347 14:54:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.347 14:54:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.605 14:54:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:27.606 14:54:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:27.864 true 
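The @27-@36 entries near the start of this phase configure the target (TCP transport, subsystem cnode1 with a 10-namespace limit, a listener on 10.0.0.2:4420, a Malloc0-backed Delay0 bdev and a NULL1 bdev), and the repeating @44-@50 entries above are the hotplug loop that then runs for as long as spdk_nvme_perf (PID 79486) stays alive. A condensed sketch of both, reconstructed from the xtrace output; the commands are as logged, the loop structure and variable names are inferred:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # one-time setup (ns_hotplug_stress.sh @27-@36)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns $nqn Delay0
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns $nqn NULL1

  # 30-second randread load that the hotplugging races against (@40/@42)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!                                      # 79486 in this run

  # hotplug loop (@44-@50)
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      $rpc nvmf_subsystem_remove_ns $nqn 1          # hot-remove namespace 1 while I/O is in flight
      $rpc nvmf_subsystem_add_ns   $nqn Delay0      # re-attach the delay bdev
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 $null_size        # bump NULL1's size (1000 -> 1001 -> ...; the "true" lines above)
  done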
00:13:27.864 14:54:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:27.864 14:54:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.802 14:54:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.060 14:54:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:29.060 14:54:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:29.320 true 00:13:29.320 14:54:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:29.320 14:54:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.580 14:54:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.839 14:54:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:29.839 14:54:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:30.099 true 00:13:30.099 14:54:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:30.099 14:54:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.666 14:54:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.923 14:54:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:30.923 14:54:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:31.181 true 00:13:31.181 14:54:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:31.181 14:54:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.439 14:54:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.698 14:54:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:31.698 14:54:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:31.956 true 00:13:31.956 14:54:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:31.956 14:54:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.891 14:54:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.149 14:54:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:33.149 14:54:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:33.149 true 00:13:33.149 14:54:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:33.149 14:54:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.408 14:54:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.667 14:54:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:33.667 14:54:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:33.925 true 00:13:33.925 14:54:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:33.925 14:54:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.926 Initializing NVMe Controllers 00:13:34.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:34.926 Controller IO queue size 128, less than required. 00:13:34.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:34.926 Controller IO queue size 128, less than required. 00:13:34.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:34.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:34.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:34.927 Initialization complete. Launching workers. 00:13:34.927 ======================================================== 00:13:34.927 Latency(us) 00:13:34.927 Device Information : IOPS MiB/s Average min max 00:13:34.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 274.56 0.13 265116.47 4327.67 1050785.90 00:13:34.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14315.64 6.99 8940.99 2279.44 516726.59 00:13:34.927 ======================================================== 00:13:34.927 Total : 14590.20 7.12 13761.81 2279.44 1050785.90 00:13:34.927 00:13:34.927 14:54:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.927 14:54:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:34.927 14:54:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:35.185 true 00:13:35.185 14:54:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79486 00:13:35.185 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79486) - No such process 00:13:35.185 14:54:08 -- target/ns_hotplug_stress.sh@53 -- # wait 79486 00:13:35.185 14:54:08 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.444 14:54:08 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.702 14:54:08 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:35.702 14:54:08 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:35.702 14:54:08 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:35.702 14:54:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.702 14:54:08 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:35.960 null0 00:13:35.960 14:54:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:35.960 14:54:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.960 14:54:08 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:36.217 null1 
00:13:36.217 14:54:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.217 14:54:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.217 14:54:09 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:36.217 null2 00:13:36.217 14:54:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.217 14:54:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.217 14:54:09 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:36.474 null3 00:13:36.474 14:54:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.474 14:54:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.474 14:54:09 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:36.732 null4 00:13:36.732 14:54:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.732 14:54:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.732 14:54:09 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:36.990 null5 00:13:36.990 14:54:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:36.990 14:54:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:36.990 14:54:09 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:37.247 null6 00:13:37.247 14:54:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:37.247 14:54:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:37.247 14:54:10 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:37.505 null7 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
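From @58 onward the test moves to its parallel phase: eight null bdevs (null0-null7) are created and eight add_remove workers run concurrently, each attaching and detaching its own namespace ID ten times (the (( i < 10 )) checks in the interleaved output). A condensed sketch of that phase, pieced together from the @58-@66 and @14-@18 xtrace entries; variable names mirror the script's, the control flow is inferred:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  nthreads=8
  pids=()

  add_remove() {                                   # one worker: hot-plug a single namespace ten times
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
          $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  }

  for ((i = 0; i < nthreads; i++)); do
      $rpc bdev_null_create "null$i" 100 4096      # size 100, block size 4096, as logged
  done

  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &             # nsid 1..8 paired with null0..null7, all in parallel
      pids+=($!)
  done
  wait "${pids[@]}"                                # the "wait 80539 80540 ..." entry below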
00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:37.505 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@66 -- # wait 80539 80540 80543 80545 80546 80548 80551 80552 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:37.506 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:37.763 14:54:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.763 14:54:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:37.763 14:54:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:37.763 14:54:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.763 14:54:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:37.763 14:54:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:37.763 14:54:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:37.763 14:54:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.021 14:54:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.021 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.021 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.021 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.021 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.021 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.021 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.021 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.021 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.021 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.278 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.278 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.278 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.278 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.278 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.278 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.278 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.278 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:38.278 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.535 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:38.535 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.535 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.536 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.793 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.793 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.793 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.793 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.793 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.793 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.793 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.793 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.794 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.794 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.794 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.794 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.794 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.794 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.794 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.053 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.053 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.053 14:54:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.053 14:54:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:39.053 14:54:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:39.053 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:39.312 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.571 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.829 14:54:12 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.088 14:54:12 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.088 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.348 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.348 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.348 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.348 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.348 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.348 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.348 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.348 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.348 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.348 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.607 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.866 14:54:13 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.125 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.383 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.383 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.383 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.384 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.642 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.642 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.642 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.642 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.642 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.642 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.643 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.902 14:54:14 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.902 14:54:14 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.160 14:54:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@18 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.418 14:54:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.676 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.936 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.936 14:54:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.936 14:54:15 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:42.936 14:54:15 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:42.936 14:54:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:42.936 14:54:15 -- nvmf/common.sh@116 -- # sync 00:13:42.936 14:54:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:42.936 14:54:15 -- nvmf/common.sh@119 -- # set +e 00:13:42.936 14:54:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:42.936 14:54:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:42.936 rmmod nvme_tcp 00:13:42.936 rmmod nvme_fabrics 00:13:42.936 rmmod nvme_keyring 00:13:42.936 14:54:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:42.936 14:54:15 -- nvmf/common.sh@123 -- # set -e 00:13:42.936 14:54:15 -- nvmf/common.sh@124 -- # return 0 00:13:42.936 14:54:15 -- nvmf/common.sh@477 -- # '[' -n 79355 ']' 00:13:42.936 14:54:15 -- nvmf/common.sh@478 -- # killprocess 79355 00:13:42.936 14:54:15 -- common/autotest_common.sh@936 -- # '[' -z 79355 ']' 00:13:42.936 14:54:15 -- common/autotest_common.sh@940 -- # kill -0 79355 00:13:42.936 14:54:15 -- common/autotest_common.sh@941 -- # uname 00:13:42.936 14:54:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:42.936 14:54:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79355 
00:13:42.936 14:54:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:42.936 14:54:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:42.936 killing process with pid 79355 00:13:42.936 14:54:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79355' 00:13:42.936 14:54:15 -- common/autotest_common.sh@955 -- # kill 79355 00:13:42.936 14:54:15 -- common/autotest_common.sh@960 -- # wait 79355 00:13:43.196 14:54:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:43.196 14:54:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:43.196 14:54:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:43.196 14:54:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.196 14:54:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:43.196 14:54:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.196 14:54:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.196 14:54:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.196 14:54:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:43.196 00:13:43.196 real 0m42.669s 00:13:43.196 user 3m23.214s 00:13:43.196 sys 0m11.950s 00:13:43.196 14:54:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:43.196 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:13:43.196 ************************************ 00:13:43.196 END TEST nvmf_ns_hotplug_stress 00:13:43.196 ************************************ 00:13:43.196 14:54:16 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:43.196 14:54:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:43.196 14:54:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.196 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:13:43.196 ************************************ 00:13:43.196 START TEST nvmf_connect_stress 00:13:43.196 ************************************ 00:13:43.196 14:54:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:43.196 * Looking for test storage... 
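Between tests, nvmftestfini tears the environment back down: it unloads the host-side NVMe/TCP kernel modules, kills the nvmf_tgt reactor process by PID (refusing to kill anything whose comm looks like sudo), and flushes the addresses that setup added. A rough sketch of that cleanup; the function name below is made up, and the real logic is spread across nvmftestfini/nvmfcleanup in test/nvmf/common.sh and killprocess in autotest_common.sh:

    # Rough sketch of the teardown traced above; error handling is simplified
    # and remove_spdk_ns is assumed to delete the target network namespace.
    nvmf_teardown_sketch() {
        local tgt_pid=$1
        sync
        modprobe -v -r nvme-tcp || true       # also drops nvme_fabrics/nvme_keyring
        modprobe -v -r nvme-fabrics || true
        if kill -0 "$tgt_pid" 2>/dev/null; then
            # only SPDK reactor processes get killed; sudo wrappers are skipped
            [[ $(ps --no-headers -o comm= "$tgt_pid") != sudo ]] || return 1
            echo "killing process with pid $tgt_pid"
            kill "$tgt_pid"
            wait "$tgt_pid" 2>/dev/null || true
        fi
        ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
        ip -4 addr flush nvmf_init_if || true
    }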
00:13:43.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:43.196 14:54:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:43.196 14:54:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:43.196 14:54:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:43.456 14:54:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:43.456 14:54:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:43.456 14:54:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:43.456 14:54:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:43.456 14:54:16 -- scripts/common.sh@335 -- # IFS=.-: 00:13:43.456 14:54:16 -- scripts/common.sh@335 -- # read -ra ver1 00:13:43.456 14:54:16 -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.456 14:54:16 -- scripts/common.sh@336 -- # read -ra ver2 00:13:43.456 14:54:16 -- scripts/common.sh@337 -- # local 'op=<' 00:13:43.456 14:54:16 -- scripts/common.sh@339 -- # ver1_l=2 00:13:43.456 14:54:16 -- scripts/common.sh@340 -- # ver2_l=1 00:13:43.456 14:54:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:43.456 14:54:16 -- scripts/common.sh@343 -- # case "$op" in 00:13:43.456 14:54:16 -- scripts/common.sh@344 -- # : 1 00:13:43.456 14:54:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:43.456 14:54:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:43.456 14:54:16 -- scripts/common.sh@364 -- # decimal 1 00:13:43.456 14:54:16 -- scripts/common.sh@352 -- # local d=1 00:13:43.456 14:54:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.456 14:54:16 -- scripts/common.sh@354 -- # echo 1 00:13:43.456 14:54:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:43.456 14:54:16 -- scripts/common.sh@365 -- # decimal 2 00:13:43.456 14:54:16 -- scripts/common.sh@352 -- # local d=2 00:13:43.456 14:54:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.456 14:54:16 -- scripts/common.sh@354 -- # echo 2 00:13:43.456 14:54:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:43.456 14:54:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:43.456 14:54:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:43.456 14:54:16 -- scripts/common.sh@367 -- # return 0 00:13:43.456 14:54:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.456 14:54:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:43.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.456 --rc genhtml_branch_coverage=1 00:13:43.456 --rc genhtml_function_coverage=1 00:13:43.456 --rc genhtml_legend=1 00:13:43.456 --rc geninfo_all_blocks=1 00:13:43.456 --rc geninfo_unexecuted_blocks=1 00:13:43.456 00:13:43.456 ' 00:13:43.456 14:54:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:43.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.457 --rc genhtml_branch_coverage=1 00:13:43.457 --rc genhtml_function_coverage=1 00:13:43.457 --rc genhtml_legend=1 00:13:43.457 --rc geninfo_all_blocks=1 00:13:43.457 --rc geninfo_unexecuted_blocks=1 00:13:43.457 00:13:43.457 ' 00:13:43.457 14:54:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:43.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.457 --rc genhtml_branch_coverage=1 00:13:43.457 --rc genhtml_function_coverage=1 00:13:43.457 --rc genhtml_legend=1 00:13:43.457 --rc geninfo_all_blocks=1 00:13:43.457 --rc geninfo_unexecuted_blocks=1 00:13:43.457 00:13:43.457 ' 00:13:43.457 
14:54:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:43.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.457 --rc genhtml_branch_coverage=1 00:13:43.457 --rc genhtml_function_coverage=1 00:13:43.457 --rc genhtml_legend=1 00:13:43.457 --rc geninfo_all_blocks=1 00:13:43.457 --rc geninfo_unexecuted_blocks=1 00:13:43.457 00:13:43.457 ' 00:13:43.457 14:54:16 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.457 14:54:16 -- nvmf/common.sh@7 -- # uname -s 00:13:43.457 14:54:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.457 14:54:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.457 14:54:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.457 14:54:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.457 14:54:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.457 14:54:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.457 14:54:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.457 14:54:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.457 14:54:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.457 14:54:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.457 14:54:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:13:43.457 14:54:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:13:43.457 14:54:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.457 14:54:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.457 14:54:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.457 14:54:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.457 14:54:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.457 14:54:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.457 14:54:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.457 14:54:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.457 14:54:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.457 14:54:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.457 14:54:16 -- paths/export.sh@5 -- # export PATH 00:13:43.457 14:54:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.457 14:54:16 -- nvmf/common.sh@46 -- # : 0 00:13:43.457 14:54:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:43.457 14:54:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:43.457 14:54:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:43.457 14:54:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.457 14:54:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.457 14:54:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:43.457 14:54:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:43.457 14:54:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:43.457 14:54:16 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:43.457 14:54:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:43.457 14:54:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.457 14:54:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:43.457 14:54:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:43.457 14:54:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:43.457 14:54:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.457 14:54:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.457 14:54:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.457 14:54:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:43.457 14:54:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:43.457 14:54:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:43.457 14:54:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:43.457 14:54:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:43.457 14:54:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:43.457 14:54:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.457 14:54:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.457 14:54:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:43.457 14:54:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:43.457 14:54:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:43.457 14:54:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:43.457 14:54:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:43.457 14:54:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:43.457 14:54:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:43.457 14:54:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:43.457 14:54:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:43.457 14:54:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:43.457 14:54:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:43.457 14:54:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:43.457 Cannot find device "nvmf_tgt_br" 00:13:43.457 14:54:16 -- nvmf/common.sh@154 -- # true 00:13:43.457 14:54:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.457 Cannot find device "nvmf_tgt_br2" 00:13:43.457 14:54:16 -- nvmf/common.sh@155 -- # true 00:13:43.457 14:54:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:43.457 14:54:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:43.457 Cannot find device "nvmf_tgt_br" 00:13:43.457 14:54:16 -- nvmf/common.sh@157 -- # true 00:13:43.457 14:54:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:43.457 Cannot find device "nvmf_tgt_br2" 00:13:43.457 14:54:16 -- nvmf/common.sh@158 -- # true 00:13:43.457 14:54:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:43.457 14:54:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:43.457 14:54:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.457 14:54:16 -- nvmf/common.sh@161 -- # true 00:13:43.457 14:54:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.457 14:54:16 -- nvmf/common.sh@162 -- # true 00:13:43.457 14:54:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:43.457 14:54:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:43.457 14:54:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:43.457 14:54:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:43.457 14:54:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:43.716 14:54:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:43.716 14:54:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.716 14:54:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:43.716 14:54:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:43.716 14:54:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:43.716 14:54:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:43.716 14:54:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:43.716 14:54:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:43.717 14:54:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.717 14:54:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.717 14:54:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:43.717 14:54:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:43.717 14:54:16 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:43.717 14:54:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:43.717 14:54:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:43.717 14:54:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:43.717 14:54:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:43.717 14:54:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:43.717 14:54:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:43.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:13:43.717 00:13:43.717 --- 10.0.0.2 ping statistics --- 00:13:43.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.717 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:43.717 14:54:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:43.717 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:43.717 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:13:43.717 00:13:43.717 --- 10.0.0.3 ping statistics --- 00:13:43.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.717 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:43.717 14:54:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:43.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:43.717 00:13:43.717 --- 10.0.0.1 ping statistics --- 00:13:43.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.717 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:43.717 14:54:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.717 14:54:16 -- nvmf/common.sh@421 -- # return 0 00:13:43.717 14:54:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:43.717 14:54:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.717 14:54:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:43.717 14:54:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:43.717 14:54:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.717 14:54:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:43.717 14:54:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:43.717 14:54:16 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:43.717 14:54:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:43.717 14:54:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.717 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:13:43.717 14:54:16 -- nvmf/common.sh@469 -- # nvmfpid=81861 00:13:43.717 14:54:16 -- nvmf/common.sh@470 -- # waitforlisten 81861 00:13:43.717 14:54:16 -- common/autotest_common.sh@829 -- # '[' -z 81861 ']' 00:13:43.717 14:54:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.717 14:54:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.717 14:54:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:43.717 14:54:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
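Because NET_TYPE=virt, the harness builds the whole NVMe/TCP test network out of veth pairs: the target ends sit inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), the initiator end stays in the root namespace (10.0.0.1), and the bridge nvmf_br ties the peer ends together before nvmf_tgt is launched with ip netns exec. A condensed version of the nvmf_veth_init commands traced above (run as root):

    # Condensed from the trace above; errors from pre-existing links are ignored here.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3        # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator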
00:13:43.717 14:54:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.717 14:54:16 -- common/autotest_common.sh@10 -- # set +x 00:13:43.717 [2024-12-01 14:54:16.788893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:43.717 [2024-12-01 14:54:16.789430] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.976 [2024-12-01 14:54:16.929951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.976 [2024-12-01 14:54:16.986530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:43.976 [2024-12-01 14:54:16.986666] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.976 [2024-12-01 14:54:16.986677] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.976 [2024-12-01 14:54:16.986687] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.976 [2024-12-01 14:54:16.986867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.976 [2024-12-01 14:54:16.987782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.976 [2024-12-01 14:54:16.987807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.913 14:54:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.913 14:54:17 -- common/autotest_common.sh@862 -- # return 0 00:13:44.913 14:54:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:44.913 14:54:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.913 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:13:44.913 14:54:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.913 14:54:17 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.913 14:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.913 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:13:44.913 [2024-12-01 14:54:17.777016] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.913 14:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.913 14:54:17 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:44.913 14:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.913 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:13:44.913 14:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.913 14:54:17 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.913 14:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.913 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:13:44.914 [2024-12-01 14:54:17.794929] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.914 14:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.914 14:54:17 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:44.914 14:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.914 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:13:44.914 NULL1 00:13:44.914 
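With the target listening on /var/tmp/spdk.sock, connect_stress.sh provisions it entirely over JSON-RPC: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev (512-byte blocks) to attach. The test goes through the rpc_cmd wrapper, but the same calls expressed directly against scripts/rpc.py look roughly like this:

    # Same provisioning as the rpc_cmd trace above, as plain rpc.py invocations.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks

The trace that follows then launches the connect_stress tool against that listener for 10 seconds and keeps the test alive by polling the tool with kill -0 while issuing periodic rpc_cmd calls, so a hung or crashed target shows up immediately.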
14:54:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.914 14:54:17 -- target/connect_stress.sh@21 -- # PERF_PID=81913 00:13:44.914 14:54:17 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:44.914 14:54:17 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:44.914 14:54:17 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.914 14:54:17 -- target/connect_stress.sh@28 -- # cat 00:13:44.914 14:54:17 -- target/connect_stress.sh@34 -- # kill -0 
81913 00:13:44.914 14:54:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.914 14:54:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.914 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:13:45.173 14:54:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.173 14:54:18 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:45.173 14:54:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.173 14:54:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.173 14:54:18 -- common/autotest_common.sh@10 -- # set +x 00:13:45.432 14:54:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.432 14:54:18 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:45.432 14:54:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.432 14:54:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.432 14:54:18 -- common/autotest_common.sh@10 -- # set +x 00:13:46.000 14:54:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.000 14:54:18 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:46.000 14:54:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.000 14:54:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.000 14:54:18 -- common/autotest_common.sh@10 -- # set +x 00:13:46.259 14:54:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.259 14:54:19 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:46.259 14:54:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.259 14:54:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.259 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:46.518 14:54:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.518 14:54:19 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:46.518 14:54:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.518 14:54:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.518 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:46.778 14:54:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.778 14:54:19 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:46.778 14:54:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.778 14:54:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.778 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:13:47.037 14:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.037 14:54:20 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:47.037 14:54:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.037 14:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.037 14:54:20 -- common/autotest_common.sh@10 -- # set +x 00:13:47.604 14:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.604 14:54:20 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:47.604 14:54:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.605 14:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.605 14:54:20 -- common/autotest_common.sh@10 -- # set +x 00:13:47.864 14:54:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.864 14:54:20 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:47.864 14:54:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.864 14:54:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.864 14:54:20 -- common/autotest_common.sh@10 -- # set +x 00:13:48.124 14:54:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.124 14:54:21 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:48.124 14:54:21 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.124 14:54:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.124 14:54:21 -- common/autotest_common.sh@10 -- # set +x 00:13:48.383 14:54:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.383 14:54:21 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:48.383 14:54:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.383 14:54:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.383 14:54:21 -- common/autotest_common.sh@10 -- # set +x 00:13:48.950 14:54:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.950 14:54:21 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:48.950 14:54:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.950 14:54:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.950 14:54:21 -- common/autotest_common.sh@10 -- # set +x 00:13:49.210 14:54:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.210 14:54:22 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:49.210 14:54:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.210 14:54:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.210 14:54:22 -- common/autotest_common.sh@10 -- # set +x 00:13:49.469 14:54:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.469 14:54:22 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:49.469 14:54:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.469 14:54:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.469 14:54:22 -- common/autotest_common.sh@10 -- # set +x 00:13:49.728 14:54:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.728 14:54:22 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:49.728 14:54:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.728 14:54:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.728 14:54:22 -- common/autotest_common.sh@10 -- # set +x 00:13:49.987 14:54:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.987 14:54:23 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:49.987 14:54:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.987 14:54:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.987 14:54:23 -- common/autotest_common.sh@10 -- # set +x 00:13:50.554 14:54:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.554 14:54:23 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:50.554 14:54:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.554 14:54:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.554 14:54:23 -- common/autotest_common.sh@10 -- # set +x 00:13:50.814 14:54:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.814 14:54:23 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:50.814 14:54:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.814 14:54:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.814 14:54:23 -- common/autotest_common.sh@10 -- # set +x 00:13:51.073 14:54:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.073 14:54:24 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:51.073 14:54:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.073 14:54:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.073 14:54:24 -- common/autotest_common.sh@10 -- # set +x 00:13:51.332 14:54:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.332 14:54:24 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:51.332 14:54:24 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:13:51.332 14:54:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.332 14:54:24 -- common/autotest_common.sh@10 -- # set +x 00:13:51.591 14:54:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.591 14:54:24 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:51.591 14:54:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.591 14:54:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.591 14:54:24 -- common/autotest_common.sh@10 -- # set +x 00:13:52.162 14:54:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.162 14:54:24 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:52.162 14:54:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.162 14:54:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.162 14:54:24 -- common/autotest_common.sh@10 -- # set +x 00:13:52.419 14:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.419 14:54:25 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:52.419 14:54:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.419 14:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.419 14:54:25 -- common/autotest_common.sh@10 -- # set +x 00:13:52.677 14:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.677 14:54:25 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:52.677 14:54:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.677 14:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.677 14:54:25 -- common/autotest_common.sh@10 -- # set +x 00:13:52.936 14:54:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.936 14:54:25 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:52.936 14:54:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.936 14:54:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.936 14:54:25 -- common/autotest_common.sh@10 -- # set +x 00:13:53.195 14:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.195 14:54:26 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:53.195 14:54:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.195 14:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.195 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:13:53.762 14:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.763 14:54:26 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:53.763 14:54:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.763 14:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.763 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:13:54.022 14:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.022 14:54:26 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:54.022 14:54:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.022 14:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.022 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:13:54.281 14:54:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.281 14:54:27 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:54.281 14:54:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.281 14:54:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.281 14:54:27 -- common/autotest_common.sh@10 -- # set +x 00:13:54.541 14:54:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.541 14:54:27 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:54.541 14:54:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.541 14:54:27 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.541 14:54:27 -- common/autotest_common.sh@10 -- # set +x 00:13:54.800 14:54:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.800 14:54:27 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:54.800 14:54:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.800 14:54:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.800 14:54:27 -- common/autotest_common.sh@10 -- # set +x 00:13:55.058 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.318 14:54:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.318 14:54:28 -- target/connect_stress.sh@34 -- # kill -0 81913 00:13:55.318 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81913) - No such process 00:13:55.318 14:54:28 -- target/connect_stress.sh@38 -- # wait 81913 00:13:55.318 14:54:28 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:55.318 14:54:28 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:55.318 14:54:28 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:55.318 14:54:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:55.318 14:54:28 -- nvmf/common.sh@116 -- # sync 00:13:55.318 14:54:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:55.318 14:54:28 -- nvmf/common.sh@119 -- # set +e 00:13:55.318 14:54:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:55.318 14:54:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:55.318 rmmod nvme_tcp 00:13:55.318 rmmod nvme_fabrics 00:13:55.318 rmmod nvme_keyring 00:13:55.318 14:54:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:55.318 14:54:28 -- nvmf/common.sh@123 -- # set -e 00:13:55.318 14:54:28 -- nvmf/common.sh@124 -- # return 0 00:13:55.318 14:54:28 -- nvmf/common.sh@477 -- # '[' -n 81861 ']' 00:13:55.318 14:54:28 -- nvmf/common.sh@478 -- # killprocess 81861 00:13:55.318 14:54:28 -- common/autotest_common.sh@936 -- # '[' -z 81861 ']' 00:13:55.318 14:54:28 -- common/autotest_common.sh@940 -- # kill -0 81861 00:13:55.318 14:54:28 -- common/autotest_common.sh@941 -- # uname 00:13:55.318 14:54:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:55.318 14:54:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81861 00:13:55.318 14:54:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:55.318 14:54:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:55.318 killing process with pid 81861 00:13:55.318 14:54:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81861' 00:13:55.318 14:54:28 -- common/autotest_common.sh@955 -- # kill 81861 00:13:55.318 14:54:28 -- common/autotest_common.sh@960 -- # wait 81861 00:13:55.577 14:54:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:55.577 14:54:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:55.577 14:54:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:55.577 14:54:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.577 14:54:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:55.577 14:54:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.577 14:54:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.577 14:54:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.577 14:54:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:55.577 00:13:55.577 real 0m12.361s 
00:13:55.577 user 0m41.486s 00:13:55.577 sys 0m2.988s 00:13:55.577 14:54:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:55.577 14:54:28 -- common/autotest_common.sh@10 -- # set +x 00:13:55.577 ************************************ 00:13:55.577 END TEST nvmf_connect_stress 00:13:55.577 ************************************ 00:13:55.577 14:54:28 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:55.577 14:54:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:55.577 14:54:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:55.577 14:54:28 -- common/autotest_common.sh@10 -- # set +x 00:13:55.577 ************************************ 00:13:55.577 START TEST nvmf_fused_ordering 00:13:55.577 ************************************ 00:13:55.577 14:54:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:55.837 * Looking for test storage... 00:13:55.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:55.837 14:54:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:55.837 14:54:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:55.837 14:54:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:55.837 14:54:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:55.837 14:54:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:55.837 14:54:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:55.837 14:54:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:55.837 14:54:28 -- scripts/common.sh@335 -- # IFS=.-: 00:13:55.837 14:54:28 -- scripts/common.sh@335 -- # read -ra ver1 00:13:55.837 14:54:28 -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.837 14:54:28 -- scripts/common.sh@336 -- # read -ra ver2 00:13:55.837 14:54:28 -- scripts/common.sh@337 -- # local 'op=<' 00:13:55.837 14:54:28 -- scripts/common.sh@339 -- # ver1_l=2 00:13:55.837 14:54:28 -- scripts/common.sh@340 -- # ver2_l=1 00:13:55.837 14:54:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:55.837 14:54:28 -- scripts/common.sh@343 -- # case "$op" in 00:13:55.837 14:54:28 -- scripts/common.sh@344 -- # : 1 00:13:55.837 14:54:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:55.837 14:54:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:55.837 14:54:28 -- scripts/common.sh@364 -- # decimal 1 00:13:55.837 14:54:28 -- scripts/common.sh@352 -- # local d=1 00:13:55.837 14:54:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.837 14:54:28 -- scripts/common.sh@354 -- # echo 1 00:13:55.837 14:54:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:55.837 14:54:28 -- scripts/common.sh@365 -- # decimal 2 00:13:55.837 14:54:28 -- scripts/common.sh@352 -- # local d=2 00:13:55.837 14:54:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.837 14:54:28 -- scripts/common.sh@354 -- # echo 2 00:13:55.837 14:54:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:55.837 14:54:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:55.837 14:54:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:55.837 14:54:28 -- scripts/common.sh@367 -- # return 0 00:13:55.837 14:54:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.837 14:54:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:55.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.837 --rc genhtml_branch_coverage=1 00:13:55.837 --rc genhtml_function_coverage=1 00:13:55.837 --rc genhtml_legend=1 00:13:55.837 --rc geninfo_all_blocks=1 00:13:55.837 --rc geninfo_unexecuted_blocks=1 00:13:55.837 00:13:55.837 ' 00:13:55.837 14:54:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:55.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.837 --rc genhtml_branch_coverage=1 00:13:55.837 --rc genhtml_function_coverage=1 00:13:55.837 --rc genhtml_legend=1 00:13:55.837 --rc geninfo_all_blocks=1 00:13:55.837 --rc geninfo_unexecuted_blocks=1 00:13:55.837 00:13:55.837 ' 00:13:55.837 14:54:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:55.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.837 --rc genhtml_branch_coverage=1 00:13:55.837 --rc genhtml_function_coverage=1 00:13:55.837 --rc genhtml_legend=1 00:13:55.837 --rc geninfo_all_blocks=1 00:13:55.837 --rc geninfo_unexecuted_blocks=1 00:13:55.837 00:13:55.837 ' 00:13:55.837 14:54:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:55.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.837 --rc genhtml_branch_coverage=1 00:13:55.837 --rc genhtml_function_coverage=1 00:13:55.837 --rc genhtml_legend=1 00:13:55.837 --rc geninfo_all_blocks=1 00:13:55.837 --rc geninfo_unexecuted_blocks=1 00:13:55.837 00:13:55.837 ' 00:13:55.837 14:54:28 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:55.837 14:54:28 -- nvmf/common.sh@7 -- # uname -s 00:13:55.837 14:54:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.837 14:54:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.837 14:54:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.837 14:54:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.837 14:54:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.837 14:54:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.837 14:54:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.837 14:54:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.837 14:54:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.837 14:54:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.837 14:54:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 
00:13:55.837 14:54:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:13:55.837 14:54:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.837 14:54:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.837 14:54:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:55.837 14:54:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:55.838 14:54:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.838 14:54:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.838 14:54:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.838 14:54:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.838 14:54:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.838 14:54:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.838 14:54:28 -- paths/export.sh@5 -- # export PATH 00:13:55.838 14:54:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.838 14:54:28 -- nvmf/common.sh@46 -- # : 0 00:13:55.838 14:54:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:55.838 14:54:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:55.838 14:54:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:55.838 14:54:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.838 14:54:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.838 14:54:28 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:55.838 14:54:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:55.838 14:54:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:55.838 14:54:28 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:55.838 14:54:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:55.838 14:54:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.838 14:54:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:55.838 14:54:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:55.838 14:54:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:55.838 14:54:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.838 14:54:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.838 14:54:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.838 14:54:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:55.838 14:54:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:55.838 14:54:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:55.838 14:54:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:55.838 14:54:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:55.838 14:54:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:55.838 14:54:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.838 14:54:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.838 14:54:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:55.838 14:54:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:55.838 14:54:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:55.838 14:54:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:55.838 14:54:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:55.838 14:54:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.838 14:54:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:55.838 14:54:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:55.838 14:54:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:55.838 14:54:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:55.838 14:54:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:55.838 14:54:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:55.838 Cannot find device "nvmf_tgt_br" 00:13:55.838 14:54:28 -- nvmf/common.sh@154 -- # true 00:13:55.838 14:54:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:55.838 Cannot find device "nvmf_tgt_br2" 00:13:55.838 14:54:28 -- nvmf/common.sh@155 -- # true 00:13:55.838 14:54:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:55.838 14:54:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:55.838 Cannot find device "nvmf_tgt_br" 00:13:55.838 14:54:28 -- nvmf/common.sh@157 -- # true 00:13:55.838 14:54:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:55.838 Cannot find device "nvmf_tgt_br2" 00:13:55.838 14:54:28 -- nvmf/common.sh@158 -- # true 00:13:55.838 14:54:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:55.838 14:54:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:56.097 14:54:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:56.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.097 14:54:28 -- nvmf/common.sh@161 -- # true 00:13:56.097 14:54:28 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:56.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.097 14:54:28 -- nvmf/common.sh@162 -- # true 00:13:56.097 14:54:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:56.097 14:54:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:56.097 14:54:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:56.097 14:54:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:56.097 14:54:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:56.097 14:54:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:56.097 14:54:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:56.097 14:54:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:56.097 14:54:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:56.097 14:54:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:56.097 14:54:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:56.097 14:54:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:56.097 14:54:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:56.097 14:54:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:56.098 14:54:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:56.098 14:54:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:56.098 14:54:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:56.098 14:54:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:56.098 14:54:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:56.098 14:54:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:56.098 14:54:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:56.098 14:54:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:56.098 14:54:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:56.098 14:54:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:56.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:13:56.098 00:13:56.098 --- 10.0.0.2 ping statistics --- 00:13:56.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.098 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:56.098 14:54:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:56.098 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:56.098 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:13:56.098 00:13:56.098 --- 10.0.0.3 ping statistics --- 00:13:56.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.098 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:56.098 14:54:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:56.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:56.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:56.098 00:13:56.098 --- 10.0.0.1 ping statistics --- 00:13:56.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.098 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:56.098 14:54:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.098 14:54:29 -- nvmf/common.sh@421 -- # return 0 00:13:56.098 14:54:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:56.098 14:54:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.098 14:54:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:56.098 14:54:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:56.098 14:54:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.098 14:54:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:56.098 14:54:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:56.098 14:54:29 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:56.098 14:54:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:56.098 14:54:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.098 14:54:29 -- common/autotest_common.sh@10 -- # set +x 00:13:56.098 14:54:29 -- nvmf/common.sh@469 -- # nvmfpid=82250 00:13:56.098 14:54:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:56.098 14:54:29 -- nvmf/common.sh@470 -- # waitforlisten 82250 00:13:56.098 14:54:29 -- common/autotest_common.sh@829 -- # '[' -z 82250 ']' 00:13:56.098 14:54:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.098 14:54:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.098 14:54:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.098 14:54:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.098 14:54:29 -- common/autotest_common.sh@10 -- # set +x 00:13:56.357 [2024-12-01 14:54:29.262529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:56.358 [2024-12-01 14:54:29.263066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.358 [2024-12-01 14:54:29.402869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.358 [2024-12-01 14:54:29.457790] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:56.358 [2024-12-01 14:54:29.457971] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.358 [2024-12-01 14:54:29.457984] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.358 [2024-12-01 14:54:29.457992] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
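[annotation] The nvmf_veth_init sequence above builds a small virtual topology for the TCP tests: a network namespace (nvmf_tgt_ns_spdk) holding the target ends of the veth pairs, a bridge (nvmf_br) joining the host-side peers, the addresses 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target), an iptables rule opening TCP port 4420, and ping checks before the target is started inside the namespace. A condensed sketch of that setup, assembled only from the ip/iptables commands visible in the trace, follows; it drops the second target interface (nvmf_tgt_if2, 10.0.0.3) and the error-tolerant cleanup the real helper performs first.

#!/usr/bin/env bash
# Condensed sketch of the veth/namespace topology created by nvmf_veth_init,
# reconstructed from the commands in the trace above (one target interface only).
set -e

ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator side, one for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br

# Move the target end into the namespace and assign the test addresses.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open the NVMe/TCP port and verify the target address is reachable.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2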
00:13:56.358 [2024-12-01 14:54:29.458022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.294 14:54:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.294 14:54:30 -- common/autotest_common.sh@862 -- # return 0 00:13:57.294 14:54:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:57.294 14:54:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:57.294 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:13:57.294 14:54:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.294 14:54:30 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.294 14:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.294 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:13:57.294 [2024-12-01 14:54:30.322076] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.294 14:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.294 14:54:30 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:57.294 14:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.294 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:13:57.294 14:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.294 14:54:30 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.294 14:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.294 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:13:57.294 [2024-12-01 14:54:30.338195] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.294 14:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.294 14:54:30 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:57.294 14:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.294 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:13:57.294 NULL1 00:13:57.294 14:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.294 14:54:30 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:57.294 14:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.294 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:13:57.294 14:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.294 14:54:30 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:57.294 14:54:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.294 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:13:57.294 14:54:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.294 14:54:30 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:57.294 [2024-12-01 14:54:30.389790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
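[annotation] Before the counter dump that follows, the test provisions the target entirely over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as namespace 1; the fused_ordering tool then connects as a host and prints one fused_ordering(N) line per iteration. A hedged replay of that sequence is sketched below using SPDK's rpc.py client; rpc_cmd in the log is the in-tree wrapper around the same calls, and the explicit rpc.py form, the /var/tmp/spdk.sock socket path, and the flag glosses in the comments are assumptions for illustration, while the method names and arguments are taken from the log itself.

#!/usr/bin/env bash
# Sketch of the RPC provisioning sequence seen in the trace, replayed with
# rpc.py. Socket path and flag glosses are assumptions; method names and
# argument values come from the log above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10              # serial number and namespace limit from the log
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                  # listener inside the target namespace
$RPC bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# The test tool then connects over TCP and exercises fused command ordering,
# emitting the fused_ordering(N) lines seen below.
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'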
00:13:57.294 [2024-12-01 14:54:30.389845] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82306 ] 00:13:57.862 Attached to nqn.2016-06.io.spdk:cnode1 00:13:57.862 Namespace ID: 1 size: 1GB 00:13:57.862 fused_ordering(0) 00:13:57.862 fused_ordering(1) 00:13:57.862 fused_ordering(2) 00:13:57.862 fused_ordering(3) 00:13:57.862 fused_ordering(4) 00:13:57.862 fused_ordering(5) 00:13:57.862 fused_ordering(6) 00:13:57.862 fused_ordering(7) 00:13:57.862 fused_ordering(8) 00:13:57.862 fused_ordering(9) 00:13:57.862 fused_ordering(10) 00:13:57.862 fused_ordering(11) 00:13:57.862 fused_ordering(12) 00:13:57.862 fused_ordering(13) 00:13:57.862 fused_ordering(14) 00:13:57.862 fused_ordering(15) 00:13:57.862 fused_ordering(16) 00:13:57.862 fused_ordering(17) 00:13:57.862 fused_ordering(18) 00:13:57.862 fused_ordering(19) 00:13:57.862 fused_ordering(20) 00:13:57.862 fused_ordering(21) 00:13:57.862 fused_ordering(22) 00:13:57.862 fused_ordering(23) 00:13:57.862 fused_ordering(24) 00:13:57.862 fused_ordering(25) 00:13:57.862 fused_ordering(26) 00:13:57.862 fused_ordering(27) 00:13:57.862 fused_ordering(28) 00:13:57.862 fused_ordering(29) 00:13:57.862 fused_ordering(30) 00:13:57.862 fused_ordering(31) 00:13:57.862 fused_ordering(32) 00:13:57.862 fused_ordering(33) 00:13:57.862 fused_ordering(34) 00:13:57.862 fused_ordering(35) 00:13:57.862 fused_ordering(36) 00:13:57.862 fused_ordering(37) 00:13:57.862 fused_ordering(38) 00:13:57.862 fused_ordering(39) 00:13:57.862 fused_ordering(40) 00:13:57.862 fused_ordering(41) 00:13:57.862 fused_ordering(42) 00:13:57.862 fused_ordering(43) 00:13:57.862 fused_ordering(44) 00:13:57.862 fused_ordering(45) 00:13:57.862 fused_ordering(46) 00:13:57.862 fused_ordering(47) 00:13:57.862 fused_ordering(48) 00:13:57.862 fused_ordering(49) 00:13:57.862 fused_ordering(50) 00:13:57.862 fused_ordering(51) 00:13:57.862 fused_ordering(52) 00:13:57.862 fused_ordering(53) 00:13:57.862 fused_ordering(54) 00:13:57.862 fused_ordering(55) 00:13:57.862 fused_ordering(56) 00:13:57.862 fused_ordering(57) 00:13:57.862 fused_ordering(58) 00:13:57.862 fused_ordering(59) 00:13:57.862 fused_ordering(60) 00:13:57.862 fused_ordering(61) 00:13:57.862 fused_ordering(62) 00:13:57.862 fused_ordering(63) 00:13:57.862 fused_ordering(64) 00:13:57.862 fused_ordering(65) 00:13:57.862 fused_ordering(66) 00:13:57.862 fused_ordering(67) 00:13:57.862 fused_ordering(68) 00:13:57.862 fused_ordering(69) 00:13:57.862 fused_ordering(70) 00:13:57.862 fused_ordering(71) 00:13:57.862 fused_ordering(72) 00:13:57.862 fused_ordering(73) 00:13:57.862 fused_ordering(74) 00:13:57.862 fused_ordering(75) 00:13:57.862 fused_ordering(76) 00:13:57.862 fused_ordering(77) 00:13:57.862 fused_ordering(78) 00:13:57.862 fused_ordering(79) 00:13:57.862 fused_ordering(80) 00:13:57.862 fused_ordering(81) 00:13:57.862 fused_ordering(82) 00:13:57.862 fused_ordering(83) 00:13:57.862 fused_ordering(84) 00:13:57.862 fused_ordering(85) 00:13:57.862 fused_ordering(86) 00:13:57.862 fused_ordering(87) 00:13:57.862 fused_ordering(88) 00:13:57.862 fused_ordering(89) 00:13:57.862 fused_ordering(90) 00:13:57.862 fused_ordering(91) 00:13:57.862 fused_ordering(92) 00:13:57.862 fused_ordering(93) 00:13:57.862 fused_ordering(94) 00:13:57.862 fused_ordering(95) 00:13:57.862 fused_ordering(96) 00:13:57.862 fused_ordering(97) 00:13:57.862 fused_ordering(98) 
00:13:57.862 fused_ordering(99) 00:13:57.862 fused_ordering(100) 00:13:57.862 fused_ordering(101) 00:13:57.862 fused_ordering(102) 00:13:57.862 fused_ordering(103) 00:13:57.862 fused_ordering(104) 00:13:57.862 fused_ordering(105) 00:13:57.862 fused_ordering(106) 00:13:57.862 fused_ordering(107) 00:13:57.862 fused_ordering(108) 00:13:57.862 fused_ordering(109) 00:13:57.862 fused_ordering(110) 00:13:57.862 fused_ordering(111) 00:13:57.862 fused_ordering(112) 00:13:57.862 fused_ordering(113) 00:13:57.862 fused_ordering(114) 00:13:57.862 fused_ordering(115) 00:13:57.862 fused_ordering(116) 00:13:57.862 fused_ordering(117) 00:13:57.862 fused_ordering(118) 00:13:57.862 fused_ordering(119) 00:13:57.862 fused_ordering(120) 00:13:57.862 fused_ordering(121) 00:13:57.862 fused_ordering(122) 00:13:57.862 fused_ordering(123) 00:13:57.862 fused_ordering(124) 00:13:57.862 fused_ordering(125) 00:13:57.862 fused_ordering(126) 00:13:57.862 fused_ordering(127) 00:13:57.862 fused_ordering(128) 00:13:57.862 fused_ordering(129) 00:13:57.862 fused_ordering(130) 00:13:57.862 fused_ordering(131) 00:13:57.862 fused_ordering(132) 00:13:57.862 fused_ordering(133) 00:13:57.862 fused_ordering(134) 00:13:57.862 fused_ordering(135) 00:13:57.862 fused_ordering(136) 00:13:57.862 fused_ordering(137) 00:13:57.862 fused_ordering(138) 00:13:57.862 fused_ordering(139) 00:13:57.862 fused_ordering(140) 00:13:57.862 fused_ordering(141) 00:13:57.862 fused_ordering(142) 00:13:57.862 fused_ordering(143) 00:13:57.862 fused_ordering(144) 00:13:57.862 fused_ordering(145) 00:13:57.862 fused_ordering(146) 00:13:57.862 fused_ordering(147) 00:13:57.862 fused_ordering(148) 00:13:57.862 fused_ordering(149) 00:13:57.862 fused_ordering(150) 00:13:57.862 fused_ordering(151) 00:13:57.862 fused_ordering(152) 00:13:57.862 fused_ordering(153) 00:13:57.862 fused_ordering(154) 00:13:57.862 fused_ordering(155) 00:13:57.862 fused_ordering(156) 00:13:57.862 fused_ordering(157) 00:13:57.862 fused_ordering(158) 00:13:57.862 fused_ordering(159) 00:13:57.862 fused_ordering(160) 00:13:57.862 fused_ordering(161) 00:13:57.862 fused_ordering(162) 00:13:57.862 fused_ordering(163) 00:13:57.862 fused_ordering(164) 00:13:57.862 fused_ordering(165) 00:13:57.862 fused_ordering(166) 00:13:57.862 fused_ordering(167) 00:13:57.862 fused_ordering(168) 00:13:57.862 fused_ordering(169) 00:13:57.862 fused_ordering(170) 00:13:57.862 fused_ordering(171) 00:13:57.862 fused_ordering(172) 00:13:57.862 fused_ordering(173) 00:13:57.862 fused_ordering(174) 00:13:57.862 fused_ordering(175) 00:13:57.862 fused_ordering(176) 00:13:57.862 fused_ordering(177) 00:13:57.862 fused_ordering(178) 00:13:57.862 fused_ordering(179) 00:13:57.862 fused_ordering(180) 00:13:57.862 fused_ordering(181) 00:13:57.862 fused_ordering(182) 00:13:57.862 fused_ordering(183) 00:13:57.862 fused_ordering(184) 00:13:57.862 fused_ordering(185) 00:13:57.862 fused_ordering(186) 00:13:57.862 fused_ordering(187) 00:13:57.862 fused_ordering(188) 00:13:57.862 fused_ordering(189) 00:13:57.862 fused_ordering(190) 00:13:57.862 fused_ordering(191) 00:13:57.862 fused_ordering(192) 00:13:57.862 fused_ordering(193) 00:13:57.862 fused_ordering(194) 00:13:57.862 fused_ordering(195) 00:13:57.862 fused_ordering(196) 00:13:57.862 fused_ordering(197) 00:13:57.862 fused_ordering(198) 00:13:57.862 fused_ordering(199) 00:13:57.862 fused_ordering(200) 00:13:57.863 fused_ordering(201) 00:13:57.863 fused_ordering(202) 00:13:57.863 fused_ordering(203) 00:13:57.863 fused_ordering(204) 00:13:57.863 fused_ordering(205) 00:13:58.121 
fused_ordering(206) 00:13:58.121 fused_ordering(207) 00:13:58.121 fused_ordering(208) 00:13:58.121 fused_ordering(209) 00:13:58.121 fused_ordering(210) 00:13:58.121 fused_ordering(211) 00:13:58.121 fused_ordering(212) 00:13:58.121 fused_ordering(213) 00:13:58.121 fused_ordering(214) 00:13:58.121 fused_ordering(215) 00:13:58.121 fused_ordering(216) 00:13:58.121 fused_ordering(217) 00:13:58.121 fused_ordering(218) 00:13:58.121 fused_ordering(219) 00:13:58.121 fused_ordering(220) 00:13:58.121 fused_ordering(221) 00:13:58.121 fused_ordering(222) 00:13:58.121 fused_ordering(223) 00:13:58.121 fused_ordering(224) 00:13:58.121 fused_ordering(225) 00:13:58.121 fused_ordering(226) 00:13:58.121 fused_ordering(227) 00:13:58.121 fused_ordering(228) 00:13:58.121 fused_ordering(229) 00:13:58.121 fused_ordering(230) 00:13:58.121 fused_ordering(231) 00:13:58.121 fused_ordering(232) 00:13:58.121 fused_ordering(233) 00:13:58.121 fused_ordering(234) 00:13:58.121 fused_ordering(235) 00:13:58.122 fused_ordering(236) 00:13:58.122 fused_ordering(237) 00:13:58.122 fused_ordering(238) 00:13:58.122 fused_ordering(239) 00:13:58.122 fused_ordering(240) 00:13:58.122 fused_ordering(241) 00:13:58.122 fused_ordering(242) 00:13:58.122 fused_ordering(243) 00:13:58.122 fused_ordering(244) 00:13:58.122 fused_ordering(245) 00:13:58.122 fused_ordering(246) 00:13:58.122 fused_ordering(247) 00:13:58.122 fused_ordering(248) 00:13:58.122 fused_ordering(249) 00:13:58.122 fused_ordering(250) 00:13:58.122 fused_ordering(251) 00:13:58.122 fused_ordering(252) 00:13:58.122 fused_ordering(253) 00:13:58.122 fused_ordering(254) 00:13:58.122 fused_ordering(255) 00:13:58.122 fused_ordering(256) 00:13:58.122 fused_ordering(257) 00:13:58.122 fused_ordering(258) 00:13:58.122 fused_ordering(259) 00:13:58.122 fused_ordering(260) 00:13:58.122 fused_ordering(261) 00:13:58.122 fused_ordering(262) 00:13:58.122 fused_ordering(263) 00:13:58.122 fused_ordering(264) 00:13:58.122 fused_ordering(265) 00:13:58.122 fused_ordering(266) 00:13:58.122 fused_ordering(267) 00:13:58.122 fused_ordering(268) 00:13:58.122 fused_ordering(269) 00:13:58.122 fused_ordering(270) 00:13:58.122 fused_ordering(271) 00:13:58.122 fused_ordering(272) 00:13:58.122 fused_ordering(273) 00:13:58.122 fused_ordering(274) 00:13:58.122 fused_ordering(275) 00:13:58.122 fused_ordering(276) 00:13:58.122 fused_ordering(277) 00:13:58.122 fused_ordering(278) 00:13:58.122 fused_ordering(279) 00:13:58.122 fused_ordering(280) 00:13:58.122 fused_ordering(281) 00:13:58.122 fused_ordering(282) 00:13:58.122 fused_ordering(283) 00:13:58.122 fused_ordering(284) 00:13:58.122 fused_ordering(285) 00:13:58.122 fused_ordering(286) 00:13:58.122 fused_ordering(287) 00:13:58.122 fused_ordering(288) 00:13:58.122 fused_ordering(289) 00:13:58.122 fused_ordering(290) 00:13:58.122 fused_ordering(291) 00:13:58.122 fused_ordering(292) 00:13:58.122 fused_ordering(293) 00:13:58.122 fused_ordering(294) 00:13:58.122 fused_ordering(295) 00:13:58.122 fused_ordering(296) 00:13:58.122 fused_ordering(297) 00:13:58.122 fused_ordering(298) 00:13:58.122 fused_ordering(299) 00:13:58.122 fused_ordering(300) 00:13:58.122 fused_ordering(301) 00:13:58.122 fused_ordering(302) 00:13:58.122 fused_ordering(303) 00:13:58.122 fused_ordering(304) 00:13:58.122 fused_ordering(305) 00:13:58.122 fused_ordering(306) 00:13:58.122 fused_ordering(307) 00:13:58.122 fused_ordering(308) 00:13:58.122 fused_ordering(309) 00:13:58.122 fused_ordering(310) 00:13:58.122 fused_ordering(311) 00:13:58.122 fused_ordering(312) 00:13:58.122 fused_ordering(313) 
00:13:58.122 fused_ordering(314) 00:13:58.122 fused_ordering(315) 00:13:58.122 fused_ordering(316) 00:13:58.122 fused_ordering(317) 00:13:58.122 fused_ordering(318) 00:13:58.122 fused_ordering(319) 00:13:58.122 fused_ordering(320) 00:13:58.122 fused_ordering(321) 00:13:58.122 fused_ordering(322) 00:13:58.122 fused_ordering(323) 00:13:58.122 fused_ordering(324) 00:13:58.122 fused_ordering(325) 00:13:58.122 fused_ordering(326) 00:13:58.122 fused_ordering(327) 00:13:58.122 fused_ordering(328) 00:13:58.122 fused_ordering(329) 00:13:58.122 fused_ordering(330) 00:13:58.122 fused_ordering(331) 00:13:58.122 fused_ordering(332) 00:13:58.122 fused_ordering(333) 00:13:58.122 fused_ordering(334) 00:13:58.122 fused_ordering(335) 00:13:58.122 fused_ordering(336) 00:13:58.122 fused_ordering(337) 00:13:58.122 fused_ordering(338) 00:13:58.122 fused_ordering(339) 00:13:58.122 fused_ordering(340) 00:13:58.122 fused_ordering(341) 00:13:58.122 fused_ordering(342) 00:13:58.122 fused_ordering(343) 00:13:58.122 fused_ordering(344) 00:13:58.122 fused_ordering(345) 00:13:58.122 fused_ordering(346) 00:13:58.122 fused_ordering(347) 00:13:58.122 fused_ordering(348) 00:13:58.122 fused_ordering(349) 00:13:58.122 fused_ordering(350) 00:13:58.122 fused_ordering(351) 00:13:58.122 fused_ordering(352) 00:13:58.122 fused_ordering(353) 00:13:58.122 fused_ordering(354) 00:13:58.122 fused_ordering(355) 00:13:58.122 fused_ordering(356) 00:13:58.122 fused_ordering(357) 00:13:58.122 fused_ordering(358) 00:13:58.122 fused_ordering(359) 00:13:58.122 fused_ordering(360) 00:13:58.122 fused_ordering(361) 00:13:58.122 fused_ordering(362) 00:13:58.122 fused_ordering(363) 00:13:58.122 fused_ordering(364) 00:13:58.122 fused_ordering(365) 00:13:58.122 fused_ordering(366) 00:13:58.122 fused_ordering(367) 00:13:58.122 fused_ordering(368) 00:13:58.122 fused_ordering(369) 00:13:58.122 fused_ordering(370) 00:13:58.122 fused_ordering(371) 00:13:58.122 fused_ordering(372) 00:13:58.122 fused_ordering(373) 00:13:58.122 fused_ordering(374) 00:13:58.122 fused_ordering(375) 00:13:58.122 fused_ordering(376) 00:13:58.122 fused_ordering(377) 00:13:58.122 fused_ordering(378) 00:13:58.122 fused_ordering(379) 00:13:58.122 fused_ordering(380) 00:13:58.122 fused_ordering(381) 00:13:58.122 fused_ordering(382) 00:13:58.122 fused_ordering(383) 00:13:58.122 fused_ordering(384) 00:13:58.122 fused_ordering(385) 00:13:58.122 fused_ordering(386) 00:13:58.122 fused_ordering(387) 00:13:58.122 fused_ordering(388) 00:13:58.122 fused_ordering(389) 00:13:58.122 fused_ordering(390) 00:13:58.122 fused_ordering(391) 00:13:58.122 fused_ordering(392) 00:13:58.122 fused_ordering(393) 00:13:58.122 fused_ordering(394) 00:13:58.122 fused_ordering(395) 00:13:58.122 fused_ordering(396) 00:13:58.122 fused_ordering(397) 00:13:58.122 fused_ordering(398) 00:13:58.122 fused_ordering(399) 00:13:58.122 fused_ordering(400) 00:13:58.122 fused_ordering(401) 00:13:58.122 fused_ordering(402) 00:13:58.122 fused_ordering(403) 00:13:58.122 fused_ordering(404) 00:13:58.122 fused_ordering(405) 00:13:58.122 fused_ordering(406) 00:13:58.122 fused_ordering(407) 00:13:58.122 fused_ordering(408) 00:13:58.122 fused_ordering(409) 00:13:58.122 fused_ordering(410) 00:13:58.381 fused_ordering(411) 00:13:58.381 fused_ordering(412) 00:13:58.381 fused_ordering(413) 00:13:58.381 fused_ordering(414) 00:13:58.381 fused_ordering(415) 00:13:58.381 fused_ordering(416) 00:13:58.381 fused_ordering(417) 00:13:58.381 fused_ordering(418) 00:13:58.381 fused_ordering(419) 00:13:58.381 fused_ordering(420) 00:13:58.381 
fused_ordering(421) 00:13:58.381 fused_ordering(422) 00:13:58.381 fused_ordering(423) 00:13:58.381 fused_ordering(424) 00:13:58.381 fused_ordering(425) 00:13:58.381 fused_ordering(426) 00:13:58.381 fused_ordering(427) 00:13:58.381 fused_ordering(428) 00:13:58.381 fused_ordering(429) 00:13:58.381 fused_ordering(430) 00:13:58.381 fused_ordering(431) 00:13:58.381 fused_ordering(432) 00:13:58.381 fused_ordering(433) 00:13:58.381 fused_ordering(434) 00:13:58.381 fused_ordering(435) 00:13:58.381 fused_ordering(436) 00:13:58.381 fused_ordering(437) 00:13:58.381 fused_ordering(438) 00:13:58.381 fused_ordering(439) 00:13:58.381 fused_ordering(440) 00:13:58.381 fused_ordering(441) 00:13:58.381 fused_ordering(442) 00:13:58.381 fused_ordering(443) 00:13:58.381 fused_ordering(444) 00:13:58.381 fused_ordering(445) 00:13:58.381 fused_ordering(446) 00:13:58.381 fused_ordering(447) 00:13:58.381 fused_ordering(448) 00:13:58.381 fused_ordering(449) 00:13:58.381 fused_ordering(450) 00:13:58.381 fused_ordering(451) 00:13:58.381 fused_ordering(452) 00:13:58.381 fused_ordering(453) 00:13:58.381 fused_ordering(454) 00:13:58.381 fused_ordering(455) 00:13:58.381 fused_ordering(456) 00:13:58.381 fused_ordering(457) 00:13:58.381 fused_ordering(458) 00:13:58.381 fused_ordering(459) 00:13:58.381 fused_ordering(460) 00:13:58.381 fused_ordering(461) 00:13:58.381 fused_ordering(462) 00:13:58.381 fused_ordering(463) 00:13:58.381 fused_ordering(464) 00:13:58.381 fused_ordering(465) 00:13:58.381 fused_ordering(466) 00:13:58.381 fused_ordering(467) 00:13:58.381 fused_ordering(468) 00:13:58.381 fused_ordering(469) 00:13:58.381 fused_ordering(470) 00:13:58.381 fused_ordering(471) 00:13:58.381 fused_ordering(472) 00:13:58.381 fused_ordering(473) 00:13:58.381 fused_ordering(474) 00:13:58.381 fused_ordering(475) 00:13:58.381 fused_ordering(476) 00:13:58.381 fused_ordering(477) 00:13:58.381 fused_ordering(478) 00:13:58.381 fused_ordering(479) 00:13:58.381 fused_ordering(480) 00:13:58.381 fused_ordering(481) 00:13:58.381 fused_ordering(482) 00:13:58.381 fused_ordering(483) 00:13:58.381 fused_ordering(484) 00:13:58.381 fused_ordering(485) 00:13:58.381 fused_ordering(486) 00:13:58.381 fused_ordering(487) 00:13:58.381 fused_ordering(488) 00:13:58.381 fused_ordering(489) 00:13:58.381 fused_ordering(490) 00:13:58.381 fused_ordering(491) 00:13:58.381 fused_ordering(492) 00:13:58.381 fused_ordering(493) 00:13:58.381 fused_ordering(494) 00:13:58.381 fused_ordering(495) 00:13:58.381 fused_ordering(496) 00:13:58.381 fused_ordering(497) 00:13:58.381 fused_ordering(498) 00:13:58.381 fused_ordering(499) 00:13:58.381 fused_ordering(500) 00:13:58.381 fused_ordering(501) 00:13:58.381 fused_ordering(502) 00:13:58.381 fused_ordering(503) 00:13:58.381 fused_ordering(504) 00:13:58.381 fused_ordering(505) 00:13:58.381 fused_ordering(506) 00:13:58.381 fused_ordering(507) 00:13:58.381 fused_ordering(508) 00:13:58.381 fused_ordering(509) 00:13:58.381 fused_ordering(510) 00:13:58.381 fused_ordering(511) 00:13:58.381 fused_ordering(512) 00:13:58.381 fused_ordering(513) 00:13:58.381 fused_ordering(514) 00:13:58.381 fused_ordering(515) 00:13:58.381 fused_ordering(516) 00:13:58.381 fused_ordering(517) 00:13:58.381 fused_ordering(518) 00:13:58.381 fused_ordering(519) 00:13:58.381 fused_ordering(520) 00:13:58.381 fused_ordering(521) 00:13:58.381 fused_ordering(522) 00:13:58.381 fused_ordering(523) 00:13:58.381 fused_ordering(524) 00:13:58.381 fused_ordering(525) 00:13:58.381 fused_ordering(526) 00:13:58.381 fused_ordering(527) 00:13:58.381 fused_ordering(528) 
00:13:58.381 fused_ordering(529) 00:13:58.381 fused_ordering(530) 00:13:58.381 fused_ordering(531) 00:13:58.381 fused_ordering(532) 00:13:58.381 fused_ordering(533) 00:13:58.381 fused_ordering(534) 00:13:58.381 fused_ordering(535) 00:13:58.381 fused_ordering(536) 00:13:58.381 fused_ordering(537) 00:13:58.381 fused_ordering(538) 00:13:58.381 fused_ordering(539) 00:13:58.381 fused_ordering(540) 00:13:58.381 fused_ordering(541) 00:13:58.381 fused_ordering(542) 00:13:58.381 fused_ordering(543) 00:13:58.381 fused_ordering(544) 00:13:58.381 fused_ordering(545) 00:13:58.381 fused_ordering(546) 00:13:58.381 fused_ordering(547) 00:13:58.381 fused_ordering(548) 00:13:58.381 fused_ordering(549) 00:13:58.381 fused_ordering(550) 00:13:58.381 fused_ordering(551) 00:13:58.381 fused_ordering(552) 00:13:58.381 fused_ordering(553) 00:13:58.381 fused_ordering(554) 00:13:58.381 fused_ordering(555) 00:13:58.381 fused_ordering(556) 00:13:58.381 fused_ordering(557) 00:13:58.381 fused_ordering(558) 00:13:58.381 fused_ordering(559) 00:13:58.381 fused_ordering(560) 00:13:58.381 fused_ordering(561) 00:13:58.381 fused_ordering(562) 00:13:58.381 fused_ordering(563) 00:13:58.381 fused_ordering(564) 00:13:58.381 fused_ordering(565) 00:13:58.381 fused_ordering(566) 00:13:58.381 fused_ordering(567) 00:13:58.381 fused_ordering(568) 00:13:58.381 fused_ordering(569) 00:13:58.381 fused_ordering(570) 00:13:58.381 fused_ordering(571) 00:13:58.381 fused_ordering(572) 00:13:58.381 fused_ordering(573) 00:13:58.381 fused_ordering(574) 00:13:58.381 fused_ordering(575) 00:13:58.381 fused_ordering(576) 00:13:58.381 fused_ordering(577) 00:13:58.381 fused_ordering(578) 00:13:58.381 fused_ordering(579) 00:13:58.381 fused_ordering(580) 00:13:58.381 fused_ordering(581) 00:13:58.381 fused_ordering(582) 00:13:58.381 fused_ordering(583) 00:13:58.381 fused_ordering(584) 00:13:58.381 fused_ordering(585) 00:13:58.381 fused_ordering(586) 00:13:58.381 fused_ordering(587) 00:13:58.381 fused_ordering(588) 00:13:58.381 fused_ordering(589) 00:13:58.381 fused_ordering(590) 00:13:58.381 fused_ordering(591) 00:13:58.381 fused_ordering(592) 00:13:58.381 fused_ordering(593) 00:13:58.381 fused_ordering(594) 00:13:58.381 fused_ordering(595) 00:13:58.381 fused_ordering(596) 00:13:58.381 fused_ordering(597) 00:13:58.381 fused_ordering(598) 00:13:58.381 fused_ordering(599) 00:13:58.381 fused_ordering(600) 00:13:58.381 fused_ordering(601) 00:13:58.381 fused_ordering(602) 00:13:58.381 fused_ordering(603) 00:13:58.381 fused_ordering(604) 00:13:58.381 fused_ordering(605) 00:13:58.381 fused_ordering(606) 00:13:58.381 fused_ordering(607) 00:13:58.381 fused_ordering(608) 00:13:58.381 fused_ordering(609) 00:13:58.381 fused_ordering(610) 00:13:58.381 fused_ordering(611) 00:13:58.381 fused_ordering(612) 00:13:58.381 fused_ordering(613) 00:13:58.381 fused_ordering(614) 00:13:58.382 fused_ordering(615) 00:13:58.640 fused_ordering(616) 00:13:58.640 fused_ordering(617) 00:13:58.640 fused_ordering(618) 00:13:58.640 fused_ordering(619) 00:13:58.640 fused_ordering(620) 00:13:58.640 fused_ordering(621) 00:13:58.640 fused_ordering(622) 00:13:58.640 fused_ordering(623) 00:13:58.640 fused_ordering(624) 00:13:58.640 fused_ordering(625) 00:13:58.640 fused_ordering(626) 00:13:58.640 fused_ordering(627) 00:13:58.640 fused_ordering(628) 00:13:58.640 fused_ordering(629) 00:13:58.640 fused_ordering(630) 00:13:58.640 fused_ordering(631) 00:13:58.640 fused_ordering(632) 00:13:58.640 fused_ordering(633) 00:13:58.640 fused_ordering(634) 00:13:58.640 fused_ordering(635) 00:13:58.640 
fused_ordering(636) 00:13:58.640 fused_ordering(637) 00:13:58.640 fused_ordering(638) 00:13:58.640 fused_ordering(639) 00:13:58.640 fused_ordering(640) 00:13:58.640 fused_ordering(641) 00:13:58.640 fused_ordering(642) 00:13:58.640 fused_ordering(643) 00:13:58.640 fused_ordering(644) 00:13:58.640 fused_ordering(645) 00:13:58.640 fused_ordering(646) 00:13:58.640 fused_ordering(647) 00:13:58.640 fused_ordering(648) 00:13:58.640 fused_ordering(649) 00:13:58.640 fused_ordering(650) 00:13:58.640 fused_ordering(651) 00:13:58.640 fused_ordering(652) 00:13:58.640 fused_ordering(653) 00:13:58.640 fused_ordering(654) 00:13:58.640 fused_ordering(655) 00:13:58.640 fused_ordering(656) 00:13:58.640 fused_ordering(657) 00:13:58.640 fused_ordering(658) 00:13:58.641 fused_ordering(659) 00:13:58.641 fused_ordering(660) 00:13:58.641 fused_ordering(661) 00:13:58.641 fused_ordering(662) 00:13:58.641 fused_ordering(663) 00:13:58.641 fused_ordering(664) 00:13:58.641 fused_ordering(665) 00:13:58.641 fused_ordering(666) 00:13:58.641 fused_ordering(667) 00:13:58.641 fused_ordering(668) 00:13:58.641 fused_ordering(669) 00:13:58.641 fused_ordering(670) 00:13:58.641 fused_ordering(671) 00:13:58.641 fused_ordering(672) 00:13:58.641 fused_ordering(673) 00:13:58.641 fused_ordering(674) 00:13:58.641 fused_ordering(675) 00:13:58.641 fused_ordering(676) 00:13:58.641 fused_ordering(677) 00:13:58.641 fused_ordering(678) 00:13:58.641 fused_ordering(679) 00:13:58.641 fused_ordering(680) 00:13:58.641 fused_ordering(681) 00:13:58.641 fused_ordering(682) 00:13:58.641 fused_ordering(683) 00:13:58.641 fused_ordering(684) 00:13:58.641 fused_ordering(685) 00:13:58.641 fused_ordering(686) 00:13:58.641 fused_ordering(687) 00:13:58.641 fused_ordering(688) 00:13:58.641 fused_ordering(689) 00:13:58.641 fused_ordering(690) 00:13:58.641 fused_ordering(691) 00:13:58.641 fused_ordering(692) 00:13:58.641 fused_ordering(693) 00:13:58.641 fused_ordering(694) 00:13:58.641 fused_ordering(695) 00:13:58.641 fused_ordering(696) 00:13:58.641 fused_ordering(697) 00:13:58.641 fused_ordering(698) 00:13:58.641 fused_ordering(699) 00:13:58.641 fused_ordering(700) 00:13:58.641 fused_ordering(701) 00:13:58.641 fused_ordering(702) 00:13:58.641 fused_ordering(703) 00:13:58.641 fused_ordering(704) 00:13:58.641 fused_ordering(705) 00:13:58.641 fused_ordering(706) 00:13:58.641 fused_ordering(707) 00:13:58.641 fused_ordering(708) 00:13:58.641 fused_ordering(709) 00:13:58.641 fused_ordering(710) 00:13:58.641 fused_ordering(711) 00:13:58.641 fused_ordering(712) 00:13:58.641 fused_ordering(713) 00:13:58.641 fused_ordering(714) 00:13:58.641 fused_ordering(715) 00:13:58.641 fused_ordering(716) 00:13:58.641 fused_ordering(717) 00:13:58.641 fused_ordering(718) 00:13:58.641 fused_ordering(719) 00:13:58.641 fused_ordering(720) 00:13:58.641 fused_ordering(721) 00:13:58.641 fused_ordering(722) 00:13:58.641 fused_ordering(723) 00:13:58.641 fused_ordering(724) 00:13:58.641 fused_ordering(725) 00:13:58.641 fused_ordering(726) 00:13:58.641 fused_ordering(727) 00:13:58.641 fused_ordering(728) 00:13:58.641 fused_ordering(729) 00:13:58.641 fused_ordering(730) 00:13:58.641 fused_ordering(731) 00:13:58.641 fused_ordering(732) 00:13:58.641 fused_ordering(733) 00:13:58.641 fused_ordering(734) 00:13:58.641 fused_ordering(735) 00:13:58.641 fused_ordering(736) 00:13:58.641 fused_ordering(737) 00:13:58.641 fused_ordering(738) 00:13:58.641 fused_ordering(739) 00:13:58.641 fused_ordering(740) 00:13:58.641 fused_ordering(741) 00:13:58.641 fused_ordering(742) 00:13:58.641 fused_ordering(743) 
00:13:58.641 fused_ordering(744) 00:13:58.641 fused_ordering(745) 00:13:58.641 fused_ordering(746) 00:13:58.641 fused_ordering(747) 00:13:58.641 fused_ordering(748) 00:13:58.641 fused_ordering(749) 00:13:58.641 fused_ordering(750) 00:13:58.641 fused_ordering(751) 00:13:58.641 fused_ordering(752) 00:13:58.641 fused_ordering(753) 00:13:58.641 fused_ordering(754) 00:13:58.641 fused_ordering(755) 00:13:58.641 fused_ordering(756) 00:13:58.641 fused_ordering(757) 00:13:58.641 fused_ordering(758) 00:13:58.641 fused_ordering(759) 00:13:58.641 fused_ordering(760) 00:13:58.641 fused_ordering(761) 00:13:58.641 fused_ordering(762) 00:13:58.641 fused_ordering(763) 00:13:58.641 fused_ordering(764) 00:13:58.641 fused_ordering(765) 00:13:58.641 fused_ordering(766) 00:13:58.641 fused_ordering(767) 00:13:58.641 fused_ordering(768) 00:13:58.641 fused_ordering(769) 00:13:58.641 fused_ordering(770) 00:13:58.641 fused_ordering(771) 00:13:58.641 fused_ordering(772) 00:13:58.641 fused_ordering(773) 00:13:58.641 fused_ordering(774) 00:13:58.641 fused_ordering(775) 00:13:58.641 fused_ordering(776) 00:13:58.641 fused_ordering(777) 00:13:58.641 fused_ordering(778) 00:13:58.641 fused_ordering(779) 00:13:58.641 fused_ordering(780) 00:13:58.641 fused_ordering(781) 00:13:58.641 fused_ordering(782) 00:13:58.641 fused_ordering(783) 00:13:58.641 fused_ordering(784) 00:13:58.641 fused_ordering(785) 00:13:58.641 fused_ordering(786) 00:13:58.641 fused_ordering(787) 00:13:58.641 fused_ordering(788) 00:13:58.641 fused_ordering(789) 00:13:58.641 fused_ordering(790) 00:13:58.641 fused_ordering(791) 00:13:58.641 fused_ordering(792) 00:13:58.641 fused_ordering(793) 00:13:58.641 fused_ordering(794) 00:13:58.641 fused_ordering(795) 00:13:58.641 fused_ordering(796) 00:13:58.641 fused_ordering(797) 00:13:58.641 fused_ordering(798) 00:13:58.641 fused_ordering(799) 00:13:58.641 fused_ordering(800) 00:13:58.641 fused_ordering(801) 00:13:58.641 fused_ordering(802) 00:13:58.641 fused_ordering(803) 00:13:58.641 fused_ordering(804) 00:13:58.641 fused_ordering(805) 00:13:58.641 fused_ordering(806) 00:13:58.641 fused_ordering(807) 00:13:58.641 fused_ordering(808) 00:13:58.641 fused_ordering(809) 00:13:58.641 fused_ordering(810) 00:13:58.641 fused_ordering(811) 00:13:58.641 fused_ordering(812) 00:13:58.641 fused_ordering(813) 00:13:58.641 fused_ordering(814) 00:13:58.641 fused_ordering(815) 00:13:58.641 fused_ordering(816) 00:13:58.641 fused_ordering(817) 00:13:58.641 fused_ordering(818) 00:13:58.641 fused_ordering(819) 00:13:58.641 fused_ordering(820) 00:13:59.209 fused_ordering(821) 00:13:59.209 fused_ordering(822) 00:13:59.209 fused_ordering(823) 00:13:59.209 fused_ordering(824) 00:13:59.209 fused_ordering(825) 00:13:59.209 fused_ordering(826) 00:13:59.209 fused_ordering(827) 00:13:59.209 fused_ordering(828) 00:13:59.209 fused_ordering(829) 00:13:59.209 fused_ordering(830) 00:13:59.209 fused_ordering(831) 00:13:59.209 fused_ordering(832) 00:13:59.209 fused_ordering(833) 00:13:59.209 fused_ordering(834) 00:13:59.209 fused_ordering(835) 00:13:59.209 fused_ordering(836) 00:13:59.209 fused_ordering(837) 00:13:59.209 fused_ordering(838) 00:13:59.209 fused_ordering(839) 00:13:59.209 fused_ordering(840) 00:13:59.209 fused_ordering(841) 00:13:59.209 fused_ordering(842) 00:13:59.209 fused_ordering(843) 00:13:59.209 fused_ordering(844) 00:13:59.209 fused_ordering(845) 00:13:59.209 fused_ordering(846) 00:13:59.209 fused_ordering(847) 00:13:59.209 fused_ordering(848) 00:13:59.209 fused_ordering(849) 00:13:59.209 fused_ordering(850) 00:13:59.209 
fused_ordering(851) 00:13:59.209 fused_ordering(852) 00:13:59.209 fused_ordering(853) 00:13:59.209 fused_ordering(854) 00:13:59.209 fused_ordering(855) 00:13:59.209 fused_ordering(856) 00:13:59.209 fused_ordering(857) 00:13:59.209 fused_ordering(858) 00:13:59.209 fused_ordering(859) 00:13:59.209 fused_ordering(860) 00:13:59.209 fused_ordering(861) 00:13:59.209 fused_ordering(862) 00:13:59.209 fused_ordering(863) 00:13:59.209 fused_ordering(864) 00:13:59.209 fused_ordering(865) 00:13:59.209 fused_ordering(866) 00:13:59.209 fused_ordering(867) 00:13:59.209 fused_ordering(868) 00:13:59.209 fused_ordering(869) 00:13:59.209 fused_ordering(870) 00:13:59.209 fused_ordering(871) 00:13:59.209 fused_ordering(872) 00:13:59.209 fused_ordering(873) 00:13:59.209 fused_ordering(874) 00:13:59.209 fused_ordering(875) 00:13:59.209 fused_ordering(876) 00:13:59.209 fused_ordering(877) 00:13:59.209 fused_ordering(878) 00:13:59.209 fused_ordering(879) 00:13:59.209 fused_ordering(880) 00:13:59.209 fused_ordering(881) 00:13:59.209 fused_ordering(882) 00:13:59.209 fused_ordering(883) 00:13:59.209 fused_ordering(884) 00:13:59.209 fused_ordering(885) 00:13:59.209 fused_ordering(886) 00:13:59.209 fused_ordering(887) 00:13:59.209 fused_ordering(888) 00:13:59.209 fused_ordering(889) 00:13:59.209 fused_ordering(890) 00:13:59.209 fused_ordering(891) 00:13:59.209 fused_ordering(892) 00:13:59.209 fused_ordering(893) 00:13:59.209 fused_ordering(894) 00:13:59.209 fused_ordering(895) 00:13:59.209 fused_ordering(896) 00:13:59.209 fused_ordering(897) 00:13:59.209 fused_ordering(898) 00:13:59.209 fused_ordering(899) 00:13:59.210 fused_ordering(900) 00:13:59.210 fused_ordering(901) 00:13:59.210 fused_ordering(902) 00:13:59.210 fused_ordering(903) 00:13:59.210 fused_ordering(904) 00:13:59.210 fused_ordering(905) 00:13:59.210 fused_ordering(906) 00:13:59.210 fused_ordering(907) 00:13:59.210 fused_ordering(908) 00:13:59.210 fused_ordering(909) 00:13:59.210 fused_ordering(910) 00:13:59.210 fused_ordering(911) 00:13:59.210 fused_ordering(912) 00:13:59.210 fused_ordering(913) 00:13:59.210 fused_ordering(914) 00:13:59.210 fused_ordering(915) 00:13:59.210 fused_ordering(916) 00:13:59.210 fused_ordering(917) 00:13:59.210 fused_ordering(918) 00:13:59.210 fused_ordering(919) 00:13:59.210 fused_ordering(920) 00:13:59.210 fused_ordering(921) 00:13:59.210 fused_ordering(922) 00:13:59.210 fused_ordering(923) 00:13:59.210 fused_ordering(924) 00:13:59.210 fused_ordering(925) 00:13:59.210 fused_ordering(926) 00:13:59.210 fused_ordering(927) 00:13:59.210 fused_ordering(928) 00:13:59.210 fused_ordering(929) 00:13:59.210 fused_ordering(930) 00:13:59.210 fused_ordering(931) 00:13:59.210 fused_ordering(932) 00:13:59.210 fused_ordering(933) 00:13:59.210 fused_ordering(934) 00:13:59.210 fused_ordering(935) 00:13:59.210 fused_ordering(936) 00:13:59.210 fused_ordering(937) 00:13:59.210 fused_ordering(938) 00:13:59.210 fused_ordering(939) 00:13:59.210 fused_ordering(940) 00:13:59.210 fused_ordering(941) 00:13:59.210 fused_ordering(942) 00:13:59.210 fused_ordering(943) 00:13:59.210 fused_ordering(944) 00:13:59.210 fused_ordering(945) 00:13:59.210 fused_ordering(946) 00:13:59.210 fused_ordering(947) 00:13:59.210 fused_ordering(948) 00:13:59.210 fused_ordering(949) 00:13:59.210 fused_ordering(950) 00:13:59.210 fused_ordering(951) 00:13:59.210 fused_ordering(952) 00:13:59.210 fused_ordering(953) 00:13:59.210 fused_ordering(954) 00:13:59.210 fused_ordering(955) 00:13:59.210 fused_ordering(956) 00:13:59.210 fused_ordering(957) 00:13:59.210 fused_ordering(958) 
00:13:59.210 fused_ordering(959) 00:13:59.210 fused_ordering(960) 00:13:59.210 fused_ordering(961) 00:13:59.210 fused_ordering(962) 00:13:59.210 fused_ordering(963) 00:13:59.210 fused_ordering(964) 00:13:59.210 fused_ordering(965) 00:13:59.210 fused_ordering(966) 00:13:59.210 fused_ordering(967) 00:13:59.210 fused_ordering(968) 00:13:59.210 fused_ordering(969) 00:13:59.210 fused_ordering(970) 00:13:59.210 fused_ordering(971) 00:13:59.210 fused_ordering(972) 00:13:59.210 fused_ordering(973) 00:13:59.210 fused_ordering(974) 00:13:59.210 fused_ordering(975) 00:13:59.210 fused_ordering(976) 00:13:59.210 fused_ordering(977) 00:13:59.210 fused_ordering(978) 00:13:59.210 fused_ordering(979) 00:13:59.210 fused_ordering(980) 00:13:59.210 fused_ordering(981) 00:13:59.210 fused_ordering(982) 00:13:59.210 fused_ordering(983) 00:13:59.210 fused_ordering(984) 00:13:59.210 fused_ordering(985) 00:13:59.210 fused_ordering(986) 00:13:59.210 fused_ordering(987) 00:13:59.210 fused_ordering(988) 00:13:59.210 fused_ordering(989) 00:13:59.210 fused_ordering(990) 00:13:59.210 fused_ordering(991) 00:13:59.210 fused_ordering(992) 00:13:59.210 fused_ordering(993) 00:13:59.210 fused_ordering(994) 00:13:59.210 fused_ordering(995) 00:13:59.210 fused_ordering(996) 00:13:59.210 fused_ordering(997) 00:13:59.210 fused_ordering(998) 00:13:59.210 fused_ordering(999) 00:13:59.210 fused_ordering(1000) 00:13:59.210 fused_ordering(1001) 00:13:59.210 fused_ordering(1002) 00:13:59.210 fused_ordering(1003) 00:13:59.210 fused_ordering(1004) 00:13:59.210 fused_ordering(1005) 00:13:59.210 fused_ordering(1006) 00:13:59.210 fused_ordering(1007) 00:13:59.210 fused_ordering(1008) 00:13:59.210 fused_ordering(1009) 00:13:59.210 fused_ordering(1010) 00:13:59.210 fused_ordering(1011) 00:13:59.210 fused_ordering(1012) 00:13:59.210 fused_ordering(1013) 00:13:59.210 fused_ordering(1014) 00:13:59.210 fused_ordering(1015) 00:13:59.210 fused_ordering(1016) 00:13:59.210 fused_ordering(1017) 00:13:59.210 fused_ordering(1018) 00:13:59.210 fused_ordering(1019) 00:13:59.210 fused_ordering(1020) 00:13:59.210 fused_ordering(1021) 00:13:59.210 fused_ordering(1022) 00:13:59.210 fused_ordering(1023) 00:13:59.210 14:54:32 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:59.210 14:54:32 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:59.210 14:54:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:59.210 14:54:32 -- nvmf/common.sh@116 -- # sync 00:13:59.210 14:54:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:59.210 14:54:32 -- nvmf/common.sh@119 -- # set +e 00:13:59.210 14:54:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:59.210 14:54:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:59.210 rmmod nvme_tcp 00:13:59.210 rmmod nvme_fabrics 00:13:59.210 rmmod nvme_keyring 00:13:59.210 14:54:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:59.210 14:54:32 -- nvmf/common.sh@123 -- # set -e 00:13:59.210 14:54:32 -- nvmf/common.sh@124 -- # return 0 00:13:59.210 14:54:32 -- nvmf/common.sh@477 -- # '[' -n 82250 ']' 00:13:59.210 14:54:32 -- nvmf/common.sh@478 -- # killprocess 82250 00:13:59.210 14:54:32 -- common/autotest_common.sh@936 -- # '[' -z 82250 ']' 00:13:59.210 14:54:32 -- common/autotest_common.sh@940 -- # kill -0 82250 00:13:59.210 14:54:32 -- common/autotest_common.sh@941 -- # uname 00:13:59.210 14:54:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:59.210 14:54:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82250 00:13:59.470 14:54:32 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:59.470 14:54:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:59.470 14:54:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82250' 00:13:59.470 killing process with pid 82250 00:13:59.470 14:54:32 -- common/autotest_common.sh@955 -- # kill 82250 00:13:59.471 14:54:32 -- common/autotest_common.sh@960 -- # wait 82250 00:13:59.471 14:54:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:59.471 14:54:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:59.471 14:54:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:59.471 14:54:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.471 14:54:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:59.471 14:54:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.471 14:54:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.471 14:54:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.471 14:54:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:59.471 ************************************ 00:13:59.471 END TEST nvmf_fused_ordering 00:13:59.471 ************************************ 00:13:59.471 00:13:59.471 real 0m3.909s 00:13:59.471 user 0m4.424s 00:13:59.471 sys 0m1.434s 00:13:59.471 14:54:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:59.471 14:54:32 -- common/autotest_common.sh@10 -- # set +x 00:13:59.730 14:54:32 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:59.730 14:54:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:59.730 14:54:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.730 14:54:32 -- common/autotest_common.sh@10 -- # set +x 00:13:59.730 ************************************ 00:13:59.730 START TEST nvmf_delete_subsystem 00:13:59.730 ************************************ 00:13:59.730 14:54:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:59.730 * Looking for test storage... 
00:13:59.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:59.730 14:54:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:59.730 14:54:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:59.730 14:54:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:59.730 14:54:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:59.730 14:54:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:59.730 14:54:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:59.730 14:54:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:59.730 14:54:32 -- scripts/common.sh@335 -- # IFS=.-: 00:13:59.730 14:54:32 -- scripts/common.sh@335 -- # read -ra ver1 00:13:59.730 14:54:32 -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.730 14:54:32 -- scripts/common.sh@336 -- # read -ra ver2 00:13:59.730 14:54:32 -- scripts/common.sh@337 -- # local 'op=<' 00:13:59.730 14:54:32 -- scripts/common.sh@339 -- # ver1_l=2 00:13:59.730 14:54:32 -- scripts/common.sh@340 -- # ver2_l=1 00:13:59.730 14:54:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:59.730 14:54:32 -- scripts/common.sh@343 -- # case "$op" in 00:13:59.730 14:54:32 -- scripts/common.sh@344 -- # : 1 00:13:59.730 14:54:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:59.730 14:54:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:59.730 14:54:32 -- scripts/common.sh@364 -- # decimal 1 00:13:59.730 14:54:32 -- scripts/common.sh@352 -- # local d=1 00:13:59.730 14:54:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.730 14:54:32 -- scripts/common.sh@354 -- # echo 1 00:13:59.730 14:54:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:59.730 14:54:32 -- scripts/common.sh@365 -- # decimal 2 00:13:59.730 14:54:32 -- scripts/common.sh@352 -- # local d=2 00:13:59.730 14:54:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.730 14:54:32 -- scripts/common.sh@354 -- # echo 2 00:13:59.730 14:54:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:59.730 14:54:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:59.730 14:54:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:59.730 14:54:32 -- scripts/common.sh@367 -- # return 0 00:13:59.730 14:54:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.730 14:54:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:59.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.730 --rc genhtml_branch_coverage=1 00:13:59.730 --rc genhtml_function_coverage=1 00:13:59.730 --rc genhtml_legend=1 00:13:59.730 --rc geninfo_all_blocks=1 00:13:59.730 --rc geninfo_unexecuted_blocks=1 00:13:59.730 00:13:59.730 ' 00:13:59.730 14:54:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:59.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.730 --rc genhtml_branch_coverage=1 00:13:59.730 --rc genhtml_function_coverage=1 00:13:59.730 --rc genhtml_legend=1 00:13:59.730 --rc geninfo_all_blocks=1 00:13:59.730 --rc geninfo_unexecuted_blocks=1 00:13:59.730 00:13:59.730 ' 00:13:59.730 14:54:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:59.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.730 --rc genhtml_branch_coverage=1 00:13:59.730 --rc genhtml_function_coverage=1 00:13:59.730 --rc genhtml_legend=1 00:13:59.730 --rc geninfo_all_blocks=1 00:13:59.730 --rc geninfo_unexecuted_blocks=1 00:13:59.730 00:13:59.730 ' 00:13:59.730 
14:54:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:59.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.730 --rc genhtml_branch_coverage=1 00:13:59.730 --rc genhtml_function_coverage=1 00:13:59.730 --rc genhtml_legend=1 00:13:59.730 --rc geninfo_all_blocks=1 00:13:59.730 --rc geninfo_unexecuted_blocks=1 00:13:59.730 00:13:59.730 ' 00:13:59.730 14:54:32 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:59.730 14:54:32 -- nvmf/common.sh@7 -- # uname -s 00:13:59.730 14:54:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.730 14:54:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.730 14:54:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.730 14:54:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.730 14:54:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.730 14:54:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.730 14:54:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.730 14:54:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.730 14:54:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.730 14:54:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.730 14:54:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:13:59.730 14:54:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:13:59.730 14:54:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.730 14:54:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.730 14:54:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:59.730 14:54:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:59.730 14:54:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.730 14:54:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.730 14:54:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.730 14:54:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.730 14:54:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.731 14:54:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.731 14:54:32 -- paths/export.sh@5 -- # export PATH 00:13:59.731 14:54:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.731 14:54:32 -- nvmf/common.sh@46 -- # : 0 00:13:59.731 14:54:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:59.731 14:54:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:59.731 14:54:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:59.731 14:54:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.731 14:54:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.731 14:54:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:59.731 14:54:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:59.731 14:54:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:59.731 14:54:32 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:59.731 14:54:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:59.731 14:54:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.731 14:54:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:59.731 14:54:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:59.731 14:54:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:59.731 14:54:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.731 14:54:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.731 14:54:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.731 14:54:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:59.731 14:54:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:59.731 14:54:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:59.731 14:54:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:59.731 14:54:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:59.731 14:54:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:59.731 14:54:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.731 14:54:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.731 14:54:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:59.731 14:54:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:59.731 14:54:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:59.731 14:54:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:59.731 14:54:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:59.731 14:54:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:59.731 14:54:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:59.731 14:54:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:59.731 14:54:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:59.731 14:54:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:59.731 14:54:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:59.731 14:54:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:59.731 Cannot find device "nvmf_tgt_br" 00:13:59.731 14:54:32 -- nvmf/common.sh@154 -- # true 00:13:59.731 14:54:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:59.731 Cannot find device "nvmf_tgt_br2" 00:13:59.731 14:54:32 -- nvmf/common.sh@155 -- # true 00:13:59.731 14:54:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:59.731 14:54:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:00.004 Cannot find device "nvmf_tgt_br" 00:14:00.004 14:54:32 -- nvmf/common.sh@157 -- # true 00:14:00.004 14:54:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:00.004 Cannot find device "nvmf_tgt_br2" 00:14:00.004 14:54:32 -- nvmf/common.sh@158 -- # true 00:14:00.004 14:54:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:00.004 14:54:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:00.004 14:54:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.004 14:54:32 -- nvmf/common.sh@161 -- # true 00:14:00.004 14:54:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.004 14:54:32 -- nvmf/common.sh@162 -- # true 00:14:00.004 14:54:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.004 14:54:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.004 14:54:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.004 14:54:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.004 14:54:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.004 14:54:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.004 14:54:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.004 14:54:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.004 14:54:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.004 14:54:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:00.004 14:54:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:00.004 14:54:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:00.004 14:54:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:00.004 14:54:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.004 14:54:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.004 14:54:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.004 14:54:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:00.004 14:54:33 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:00.004 14:54:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.004 14:54:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.004 14:54:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:00.004 14:54:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:00.004 14:54:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:00.004 14:54:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:00.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:14:00.004 00:14:00.004 --- 10.0.0.2 ping statistics --- 00:14:00.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.004 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:00.004 14:54:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:00.004 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:00.004 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:00.004 00:14:00.004 --- 10.0.0.3 ping statistics --- 00:14:00.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.004 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:00.004 14:54:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:00.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:00.285 00:14:00.285 --- 10.0.0.1 ping statistics --- 00:14:00.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.285 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:00.285 14:54:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.285 14:54:33 -- nvmf/common.sh@421 -- # return 0 00:14:00.285 14:54:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:00.285 14:54:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.285 14:54:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:00.285 14:54:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:00.285 14:54:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.285 14:54:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:00.285 14:54:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:00.285 14:54:33 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:00.285 14:54:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:00.285 14:54:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.285 14:54:33 -- common/autotest_common.sh@10 -- # set +x 00:14:00.285 14:54:33 -- nvmf/common.sh@469 -- # nvmfpid=82514 00:14:00.285 14:54:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:00.285 14:54:33 -- nvmf/common.sh@470 -- # waitforlisten 82514 00:14:00.285 14:54:33 -- common/autotest_common.sh@829 -- # '[' -z 82514 ']' 00:14:00.285 14:54:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.285 14:54:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.285 14:54:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
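A condensed sketch of the topology that the nvmf_veth_init trace above builds: one initiator veth pair bridged to two target veth pairs whose peer ends live in the nvmf_tgt_ns_spdk namespace. Interface names and addresses follow the log; the cleanup, retry, and error handling in nvmf/common.sh are omitted, so this is an illustrative stand-alone version rather than the test's own code.

# Recreate the nvmf test topology from the trace above (sketch; run as root).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # target-side ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                            # bridge the host-side peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                   # sanity-check reachability

With this in place, the target application is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3, as in the trace), so it listens on 10.0.0.2/10.0.0.3 while the initiator-side tools connect from the host end of the bridge.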
00:14:00.285 14:54:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.285 14:54:33 -- common/autotest_common.sh@10 -- # set +x 00:14:00.285 [2024-12-01 14:54:33.180705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:00.285 [2024-12-01 14:54:33.180785] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.285 [2024-12-01 14:54:33.309132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:00.285 [2024-12-01 14:54:33.387723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:00.286 [2024-12-01 14:54:33.387890] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.286 [2024-12-01 14:54:33.387902] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.286 [2024-12-01 14:54:33.387910] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.286 [2024-12-01 14:54:33.388023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.286 [2024-12-01 14:54:33.388035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.231 14:54:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.231 14:54:34 -- common/autotest_common.sh@862 -- # return 0 00:14:01.231 14:54:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:01.231 14:54:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:01.231 14:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:01.231 14:54:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.231 14:54:34 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.231 14:54:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.231 14:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:01.231 [2024-12-01 14:54:34.215288] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.231 14:54:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.231 14:54:34 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:01.231 14:54:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.231 14:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:01.231 14:54:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.231 14:54:34 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.231 14:54:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.231 14:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:01.231 [2024-12-01 14:54:34.231390] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.231 14:54:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.231 14:54:34 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:01.231 14:54:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.231 14:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:01.231 NULL1 00:14:01.231 14:54:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.231 14:54:34 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:01.231 14:54:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.231 14:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:01.231 Delay0 00:14:01.231 14:54:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.231 14:54:34 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.231 14:54:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.231 14:54:34 -- common/autotest_common.sh@10 -- # set +x 00:14:01.231 14:54:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.231 14:54:34 -- target/delete_subsystem.sh@28 -- # perf_pid=82565 00:14:01.231 14:54:34 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:01.231 14:54:34 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:01.489 [2024-12-01 14:54:34.426020] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:03.392 14:54:36 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:03.392 14:54:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.393 14:54:36 -- common/autotest_common.sh@10 -- # set +x 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting 
I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 [2024-12-01 14:54:36.466895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca870 is same with the state(5) to be set 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 
00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 starting I/O failed: -6 00:14:03.393 [2024-12-01 14:54:36.468696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd05000c350 is same with the state(5) to be set 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write 
completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 Write completed with error (sct=0, sc=8) 00:14:03.393 Read completed with error (sct=0, sc=8) 00:14:03.393 [2024-12-01 14:54:36.469001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cb120 is same with the state(5) to be set 00:14:04.330 [2024-12-01 14:54:37.439188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9070 is same with the state(5) to be set 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 [2024-12-01 14:54:37.468339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cae70 is same with the state(5) to be set 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed 
with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 [2024-12-01 14:54:37.468712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd050000c00 is same with the state(5) to be set 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, 
sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Write completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 [2024-12-01 14:54:37.470023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd05000c600 is same with the state(5) to be set 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.589 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Write completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 Read completed with error (sct=0, sc=8) 00:14:04.590 [2024-12-01 14:54:37.470298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd05000bf20 is same with the state(5) to be set 00:14:04.590 [2024-12-01 14:54:37.471396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c9070 (9): Bad file descriptor 00:14:04.590 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:04.590 14:54:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.590 14:54:37 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:04.590 14:54:37 -- target/delete_subsystem.sh@35 -- # kill -0 82565 00:14:04.590 14:54:37 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:04.590 Initializing NVMe Controllers 00:14:04.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:04.590 Controller IO queue size 128, less than required. 
00:14:04.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:04.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:04.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:04.590 Initialization complete. Launching workers. 00:14:04.590 ======================================================== 00:14:04.590 Latency(us) 00:14:04.590 Device Information : IOPS MiB/s Average min max 00:14:04.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.06 0.08 898605.04 338.11 2003773.70 00:14:04.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.48 0.08 1145149.35 1316.61 2005230.64 00:14:04.590 ======================================================== 00:14:04.590 Total : 316.54 0.15 1025156.73 338.11 2005230.64 00:14:04.590 00:14:05.158 14:54:37 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:05.158 14:54:37 -- target/delete_subsystem.sh@35 -- # kill -0 82565 00:14:05.158 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82565) - No such process 00:14:05.158 14:54:37 -- target/delete_subsystem.sh@45 -- # NOT wait 82565 00:14:05.158 14:54:37 -- common/autotest_common.sh@650 -- # local es=0 00:14:05.158 14:54:37 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82565 00:14:05.158 14:54:37 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:05.158 14:54:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.158 14:54:37 -- common/autotest_common.sh@642 -- # type -t wait 00:14:05.158 14:54:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.158 14:54:37 -- common/autotest_common.sh@653 -- # wait 82565 00:14:05.158 14:54:37 -- common/autotest_common.sh@653 -- # es=1 00:14:05.158 14:54:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.158 14:54:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.158 14:54:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.158 14:54:37 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:05.158 14:54:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.158 14:54:37 -- common/autotest_common.sh@10 -- # set +x 00:14:05.158 14:54:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.158 14:54:37 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.158 14:54:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.158 14:54:37 -- common/autotest_common.sh@10 -- # set +x 00:14:05.158 [2024-12-01 14:54:37.995843] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.158 14:54:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.158 14:54:37 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.158 14:54:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.158 14:54:37 -- common/autotest_common.sh@10 -- # set +x 00:14:05.158 14:54:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.158 14:54:38 -- target/delete_subsystem.sh@54 -- # perf_pid=82617 00:14:05.158 14:54:38 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:05.158 14:54:38 -- target/delete_subsystem.sh@57 -- # kill -0 82617 00:14:05.158 
14:54:38 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:05.158 14:54:38 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:05.158 [2024-12-01 14:54:38.173597] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:05.416 14:54:38 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:05.416 14:54:38 -- target/delete_subsystem.sh@57 -- # kill -0 82617 00:14:05.416 14:54:38 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:05.984 14:54:39 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:05.984 14:54:39 -- target/delete_subsystem.sh@57 -- # kill -0 82617 00:14:05.984 14:54:39 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:06.550 14:54:39 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:06.550 14:54:39 -- target/delete_subsystem.sh@57 -- # kill -0 82617 00:14:06.550 14:54:39 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:07.118 14:54:40 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:07.118 14:54:40 -- target/delete_subsystem.sh@57 -- # kill -0 82617 00:14:07.118 14:54:40 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:07.686 14:54:40 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:07.686 14:54:40 -- target/delete_subsystem.sh@57 -- # kill -0 82617 00:14:07.686 14:54:40 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:07.946 14:54:41 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:07.946 14:54:41 -- target/delete_subsystem.sh@57 -- # kill -0 82617 00:14:07.946 14:54:41 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:08.205 Initializing NVMe Controllers 00:14:08.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.205 Controller IO queue size 128, less than required. 00:14:08.206 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:08.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:08.206 Initialization complete. Launching workers. 
00:14:08.206 ======================================================== 00:14:08.206 Latency(us) 00:14:08.206 Device Information : IOPS MiB/s Average min max 00:14:08.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004275.01 1000147.69 1014944.79 00:14:08.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006782.51 1000390.71 1018304.40 00:14:08.206 ======================================================== 00:14:08.206 Total : 256.00 0.12 1005528.76 1000147.69 1018304.40 00:14:08.206 00:14:08.464 14:54:41 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:08.464 14:54:41 -- target/delete_subsystem.sh@57 -- # kill -0 82617 00:14:08.464 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82617) - No such process 00:14:08.464 14:54:41 -- target/delete_subsystem.sh@67 -- # wait 82617 00:14:08.464 14:54:41 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:08.464 14:54:41 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:08.464 14:54:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:08.464 14:54:41 -- nvmf/common.sh@116 -- # sync 00:14:08.723 14:54:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:08.723 14:54:41 -- nvmf/common.sh@119 -- # set +e 00:14:08.723 14:54:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:08.723 14:54:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:08.723 rmmod nvme_tcp 00:14:08.723 rmmod nvme_fabrics 00:14:08.723 rmmod nvme_keyring 00:14:08.723 14:54:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:08.723 14:54:41 -- nvmf/common.sh@123 -- # set -e 00:14:08.723 14:54:41 -- nvmf/common.sh@124 -- # return 0 00:14:08.723 14:54:41 -- nvmf/common.sh@477 -- # '[' -n 82514 ']' 00:14:08.723 14:54:41 -- nvmf/common.sh@478 -- # killprocess 82514 00:14:08.723 14:54:41 -- common/autotest_common.sh@936 -- # '[' -z 82514 ']' 00:14:08.723 14:54:41 -- common/autotest_common.sh@940 -- # kill -0 82514 00:14:08.723 14:54:41 -- common/autotest_common.sh@941 -- # uname 00:14:08.723 14:54:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:08.723 14:54:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82514 00:14:08.723 14:54:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:08.723 killing process with pid 82514 00:14:08.723 14:54:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:08.723 14:54:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82514' 00:14:08.723 14:54:41 -- common/autotest_common.sh@955 -- # kill 82514 00:14:08.723 14:54:41 -- common/autotest_common.sh@960 -- # wait 82514 00:14:08.983 14:54:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:08.983 14:54:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:08.983 14:54:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:08.983 14:54:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:08.983 14:54:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:08.983 14:54:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.983 14:54:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.983 14:54:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.983 14:54:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:08.983 00:14:08.983 real 0m9.376s 00:14:08.983 user 0m29.222s 00:14:08.983 sys 0m1.186s 00:14:08.983 14:54:41 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:14:08.983 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:14:08.983 ************************************ 00:14:08.983 END TEST nvmf_delete_subsystem 00:14:08.983 ************************************ 00:14:08.983 14:54:42 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:08.983 14:54:42 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:08.983 14:54:42 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:08.983 14:54:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:08.983 14:54:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:08.983 14:54:42 -- common/autotest_common.sh@10 -- # set +x 00:14:08.983 ************************************ 00:14:08.983 START TEST nvmf_host_management 00:14:08.983 ************************************ 00:14:08.983 14:54:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:09.243 * Looking for test storage... 00:14:09.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:09.243 14:54:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:09.243 14:54:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:09.243 14:54:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:09.243 14:54:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:09.243 14:54:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:09.243 14:54:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:09.243 14:54:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:09.243 14:54:42 -- scripts/common.sh@335 -- # IFS=.-: 00:14:09.243 14:54:42 -- scripts/common.sh@335 -- # read -ra ver1 00:14:09.243 14:54:42 -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.243 14:54:42 -- scripts/common.sh@336 -- # read -ra ver2 00:14:09.243 14:54:42 -- scripts/common.sh@337 -- # local 'op=<' 00:14:09.243 14:54:42 -- scripts/common.sh@339 -- # ver1_l=2 00:14:09.243 14:54:42 -- scripts/common.sh@340 -- # ver2_l=1 00:14:09.243 14:54:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:09.243 14:54:42 -- scripts/common.sh@343 -- # case "$op" in 00:14:09.243 14:54:42 -- scripts/common.sh@344 -- # : 1 00:14:09.243 14:54:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:09.243 14:54:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.243 14:54:42 -- scripts/common.sh@364 -- # decimal 1 00:14:09.243 14:54:42 -- scripts/common.sh@352 -- # local d=1 00:14:09.243 14:54:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.243 14:54:42 -- scripts/common.sh@354 -- # echo 1 00:14:09.243 14:54:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:09.243 14:54:42 -- scripts/common.sh@365 -- # decimal 2 00:14:09.243 14:54:42 -- scripts/common.sh@352 -- # local d=2 00:14:09.243 14:54:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.243 14:54:42 -- scripts/common.sh@354 -- # echo 2 00:14:09.243 14:54:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:09.243 14:54:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:09.243 14:54:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:09.243 14:54:42 -- scripts/common.sh@367 -- # return 0 00:14:09.243 14:54:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.243 14:54:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:09.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.243 --rc genhtml_branch_coverage=1 00:14:09.243 --rc genhtml_function_coverage=1 00:14:09.243 --rc genhtml_legend=1 00:14:09.243 --rc geninfo_all_blocks=1 00:14:09.243 --rc geninfo_unexecuted_blocks=1 00:14:09.243 00:14:09.243 ' 00:14:09.243 14:54:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:09.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.243 --rc genhtml_branch_coverage=1 00:14:09.243 --rc genhtml_function_coverage=1 00:14:09.243 --rc genhtml_legend=1 00:14:09.243 --rc geninfo_all_blocks=1 00:14:09.243 --rc geninfo_unexecuted_blocks=1 00:14:09.243 00:14:09.243 ' 00:14:09.243 14:54:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:09.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.243 --rc genhtml_branch_coverage=1 00:14:09.243 --rc genhtml_function_coverage=1 00:14:09.243 --rc genhtml_legend=1 00:14:09.243 --rc geninfo_all_blocks=1 00:14:09.243 --rc geninfo_unexecuted_blocks=1 00:14:09.243 00:14:09.243 ' 00:14:09.243 14:54:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:09.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.243 --rc genhtml_branch_coverage=1 00:14:09.243 --rc genhtml_function_coverage=1 00:14:09.243 --rc genhtml_legend=1 00:14:09.243 --rc geninfo_all_blocks=1 00:14:09.243 --rc geninfo_unexecuted_blocks=1 00:14:09.243 00:14:09.243 ' 00:14:09.243 14:54:42 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:09.243 14:54:42 -- nvmf/common.sh@7 -- # uname -s 00:14:09.243 14:54:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.243 14:54:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.243 14:54:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.243 14:54:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.243 14:54:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.243 14:54:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.243 14:54:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.243 14:54:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.243 14:54:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.243 14:54:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.243 14:54:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 
00:14:09.243 14:54:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:14:09.243 14:54:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.243 14:54:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.243 14:54:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:09.243 14:54:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:09.243 14:54:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.243 14:54:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.243 14:54:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.243 14:54:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.243 14:54:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.243 14:54:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.243 14:54:42 -- paths/export.sh@5 -- # export PATH 00:14:09.243 14:54:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.243 14:54:42 -- nvmf/common.sh@46 -- # : 0 00:14:09.243 14:54:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:09.243 14:54:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:09.243 14:54:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:09.243 14:54:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.243 14:54:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.243 14:54:42 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:09.243 14:54:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:09.243 14:54:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:09.243 14:54:42 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.243 14:54:42 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.243 14:54:42 -- target/host_management.sh@104 -- # nvmftestinit 00:14:09.243 14:54:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:09.243 14:54:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.243 14:54:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:09.243 14:54:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:09.243 14:54:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:09.243 14:54:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.243 14:54:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.243 14:54:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.243 14:54:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:09.243 14:54:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:09.243 14:54:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:09.243 14:54:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:09.243 14:54:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:09.243 14:54:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:09.243 14:54:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.243 14:54:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.243 14:54:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:09.243 14:54:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:09.243 14:54:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:09.243 14:54:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:09.243 14:54:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:09.244 14:54:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.244 14:54:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:09.244 14:54:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:09.244 14:54:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:09.244 14:54:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:09.244 14:54:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:09.244 14:54:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:09.244 Cannot find device "nvmf_tgt_br" 00:14:09.244 14:54:42 -- nvmf/common.sh@154 -- # true 00:14:09.244 14:54:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:09.244 Cannot find device "nvmf_tgt_br2" 00:14:09.244 14:54:42 -- nvmf/common.sh@155 -- # true 00:14:09.244 14:54:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:09.244 14:54:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:09.244 Cannot find device "nvmf_tgt_br" 00:14:09.244 14:54:42 -- nvmf/common.sh@157 -- # true 00:14:09.244 14:54:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:09.244 Cannot find device "nvmf_tgt_br2" 00:14:09.244 14:54:42 -- nvmf/common.sh@158 -- # true 00:14:09.244 14:54:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:09.244 14:54:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:09.244 14:54:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:09.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.503 14:54:42 -- nvmf/common.sh@161 -- # true 00:14:09.503 14:54:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:09.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.503 14:54:42 -- nvmf/common.sh@162 -- # true 00:14:09.503 14:54:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:09.503 14:54:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:09.503 14:54:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:09.503 14:54:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:09.503 14:54:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:09.503 14:54:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:09.503 14:54:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:09.503 14:54:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:09.503 14:54:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:09.503 14:54:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:09.503 14:54:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:09.503 14:54:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:09.503 14:54:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:09.503 14:54:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:09.503 14:54:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:09.503 14:54:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:09.503 14:54:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:09.503 14:54:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:09.503 14:54:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:09.503 14:54:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:09.503 14:54:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:09.503 14:54:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:09.503 14:54:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:09.503 14:54:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:09.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:14:09.503 00:14:09.503 --- 10.0.0.2 ping statistics --- 00:14:09.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.503 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:09.503 14:54:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:09.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:09.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:09.503 00:14:09.503 --- 10.0.0.3 ping statistics --- 00:14:09.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.503 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:09.503 14:54:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:09.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:09.503 00:14:09.503 --- 10.0.0.1 ping statistics --- 00:14:09.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.503 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:09.503 14:54:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.503 14:54:42 -- nvmf/common.sh@421 -- # return 0 00:14:09.503 14:54:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:09.503 14:54:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.503 14:54:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:09.503 14:54:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:09.503 14:54:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.503 14:54:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:09.503 14:54:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:09.503 14:54:42 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:09.503 14:54:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:09.503 14:54:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.503 14:54:42 -- common/autotest_common.sh@10 -- # set +x 00:14:09.503 ************************************ 00:14:09.503 START TEST nvmf_host_management 00:14:09.503 ************************************ 00:14:09.503 14:54:42 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:09.503 14:54:42 -- target/host_management.sh@69 -- # starttarget 00:14:09.503 14:54:42 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:09.503 14:54:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:09.503 14:54:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:09.503 14:54:42 -- common/autotest_common.sh@10 -- # set +x 00:14:09.503 14:54:42 -- nvmf/common.sh@469 -- # nvmfpid=82860 00:14:09.503 14:54:42 -- nvmf/common.sh@470 -- # waitforlisten 82860 00:14:09.503 14:54:42 -- common/autotest_common.sh@829 -- # '[' -z 82860 ']' 00:14:09.503 14:54:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.503 14:54:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:09.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.503 14:54:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.503 14:54:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:09.503 14:54:42 -- common/autotest_common.sh@10 -- # set +x 00:14:09.503 14:54:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:09.762 [2024-12-01 14:54:42.633192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:09.762 [2024-12-01 14:54:42.633283] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.762 [2024-12-01 14:54:42.771504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.762 [2024-12-01 14:54:42.829501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:09.762 [2024-12-01 14:54:42.829647] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:09.762 [2024-12-01 14:54:42.829658] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.762 [2024-12-01 14:54:42.829667] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.762 [2024-12-01 14:54:42.829733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.762 [2024-12-01 14:54:42.830316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.762 [2024-12-01 14:54:42.830490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:09.762 [2024-12-01 14:54:42.830498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.700 14:54:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.700 14:54:43 -- common/autotest_common.sh@862 -- # return 0 00:14:10.700 14:54:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:10.700 14:54:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:10.700 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:10.700 14:54:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.700 14:54:43 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:10.700 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.700 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:10.700 [2024-12-01 14:54:43.708220] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.700 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.700 14:54:43 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:10.700 14:54:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.700 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:10.700 14:54:43 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:10.700 14:54:43 -- target/host_management.sh@23 -- # cat 00:14:10.700 14:54:43 -- target/host_management.sh@30 -- # rpc_cmd 00:14:10.700 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.700 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:10.700 Malloc0 00:14:10.700 [2024-12-01 14:54:43.788964] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.700 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.700 14:54:43 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:10.700 14:54:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:10.700 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:10.960 14:54:43 -- target/host_management.sh@73 -- # perfpid=82932 00:14:10.960 14:54:43 -- target/host_management.sh@74 -- # waitforlisten 82932 /var/tmp/bdevperf.sock 00:14:10.960 14:54:43 -- common/autotest_common.sh@829 -- # '[' -z 82932 ']' 00:14:10.960 14:54:43 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:10.960 14:54:43 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:10.960 14:54:43 -- nvmf/common.sh@520 -- # config=() 00:14:10.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
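The create_subsystem step traced above drives a batch of RPCs through the rpc_cmd wrapper, so only the wrapper and the resulting "Listening on 10.0.0.2 port 4420" notice show up in the log. A rough standalone equivalent, reconstructed from the sizes, NQNs and listener address in the trace (the exact rpc.py command set and the default /var/tmp/spdk.sock socket are assumptions), would be:

# hedged sketch of the target-side setup hidden behind rpc_cmd above
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0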
00:14:10.960 14:54:43 -- nvmf/common.sh@520 -- # local subsystem config 00:14:10.960 14:54:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.960 14:54:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:10.960 14:54:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.960 14:54:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:10.960 { 00:14:10.960 "params": { 00:14:10.960 "name": "Nvme$subsystem", 00:14:10.960 "trtype": "$TEST_TRANSPORT", 00:14:10.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:10.960 "adrfam": "ipv4", 00:14:10.960 "trsvcid": "$NVMF_PORT", 00:14:10.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:10.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:10.960 "hdgst": ${hdgst:-false}, 00:14:10.960 "ddgst": ${ddgst:-false} 00:14:10.960 }, 00:14:10.960 "method": "bdev_nvme_attach_controller" 00:14:10.960 } 00:14:10.960 EOF 00:14:10.960 )") 00:14:10.960 14:54:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.960 14:54:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.960 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:14:10.960 14:54:43 -- nvmf/common.sh@542 -- # cat 00:14:10.960 14:54:43 -- nvmf/common.sh@544 -- # jq . 00:14:10.960 14:54:43 -- nvmf/common.sh@545 -- # IFS=, 00:14:10.960 14:54:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:10.960 "params": { 00:14:10.960 "name": "Nvme0", 00:14:10.960 "trtype": "tcp", 00:14:10.960 "traddr": "10.0.0.2", 00:14:10.960 "adrfam": "ipv4", 00:14:10.960 "trsvcid": "4420", 00:14:10.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:10.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:10.960 "hdgst": false, 00:14:10.960 "ddgst": false 00:14:10.960 }, 00:14:10.960 "method": "bdev_nvme_attach_controller" 00:14:10.960 }' 00:14:10.960 [2024-12-01 14:54:43.898109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:10.960 [2024-12-01 14:54:43.898194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82932 ] 00:14:10.960 [2024-12-01 14:54:44.038892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.219 [2024-12-01 14:54:44.113289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.220 Running I/O for 10 seconds... 
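The JSON fragment printed just above is what gen_nvmf_target_json hands bdevperf on /dev/fd/63. Assembled, the config should look roughly like the following; the outer "subsystems"/"bdev" wrapper and the scratch file name are assumptions (the test uses a process-substitution fd instead of a file), while the bdev_nvme_attach_controller params and the bdevperf flags are exactly the ones traced:

# hedged reconstruction of the initiator-side config and invocation
cat <<'JSON' > /tmp/bdevperf_nvme0.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10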
00:14:12.158 14:54:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.158 14:54:44 -- common/autotest_common.sh@862 -- # return 0 00:14:12.158 14:54:44 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:12.158 14:54:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.158 14:54:44 -- common/autotest_common.sh@10 -- # set +x 00:14:12.158 14:54:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.158 14:54:44 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:12.158 14:54:44 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:12.158 14:54:44 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:12.158 14:54:44 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:12.158 14:54:44 -- target/host_management.sh@52 -- # local ret=1 00:14:12.158 14:54:44 -- target/host_management.sh@53 -- # local i 00:14:12.158 14:54:44 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:12.158 14:54:44 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:12.158 14:54:44 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:12.158 14:54:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.158 14:54:44 -- common/autotest_common.sh@10 -- # set +x 00:14:12.158 14:54:44 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:12.158 14:54:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.159 14:54:45 -- target/host_management.sh@55 -- # read_io_count=2476 00:14:12.159 14:54:45 -- target/host_management.sh@58 -- # '[' 2476 -ge 100 ']' 00:14:12.159 14:54:45 -- target/host_management.sh@59 -- # ret=0 00:14:12.159 14:54:45 -- target/host_management.sh@60 -- # break 00:14:12.159 14:54:45 -- target/host_management.sh@64 -- # return 0 00:14:12.159 14:54:45 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:12.159 14:54:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.159 14:54:45 -- common/autotest_common.sh@10 -- # set +x 00:14:12.159 [2024-12-01 14:54:45.015337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015414] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015442] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015450] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the 
state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015502] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015699] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015809] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015830] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.015856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fe70 is same with the state(5) to be set 00:14:12.159 [2024-12-01 14:54:45.016424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:12.159 [2024-12-01 14:54:45.016721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 
[2024-12-01 14:54:45.016955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.016984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.016995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 
14:54:45.017174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017400] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.159 [2024-12-01 14:54:45.017440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.159 [2024-12-01 14:54:45.017448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:12.160 [2024-12-01 14:54:45.017856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:12.160 [2024-12-01 14:54:45.017963] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf55dc0 was disconnected and freed. reset controller. 00:14:12.160 [2024-12-01 14:54:45.019129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:12.160 task offset: 81920 on job bdev=Nvme0n1 fails 00:14:12.160 00:14:12.160 Latency(us) 00:14:12.160 [2024-12-01T14:54:45.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.160 [2024-12-01T14:54:45.275Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:12.160 [2024-12-01T14:54:45.275Z] Job: Nvme0n1 ended in about 0.71 seconds with error 00:14:12.160 Verification LBA range: start 0x0 length 0x400 00:14:12.160 Nvme0n1 : 0.71 3773.81 235.86 90.59 0.00 16297.12 1966.08 22401.40 00:14:12.160 [2024-12-01T14:54:45.275Z] =================================================================================================================== 00:14:12.160 [2024-12-01T14:54:45.275Z] Total : 3773.81 235.86 90.59 0.00 16297.12 1966.08 22401.40 00:14:12.160 14:54:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.160 14:54:45 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:12.160 14:54:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.160 14:54:45 -- common/autotest_common.sh@10 -- # set +x 00:14:12.160 [2024-12-01 14:54:45.021062] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:12.160 [2024-12-01 14:54:45.021120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb1a70 (9): Bad file descriptor 00:14:12.160 14:54:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.160 14:54:45 -- target/host_management.sh@87 -- # sleep 1 00:14:12.160 [2024-12-01 14:54:45.031747] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
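The long burst of "ABORTED - SQ DELETION" completions above is the expected outcome of this step: with 64 verify I/Os in flight, the test revokes the host's access to the subsystem, the target tears down the queue pairs, and the bdev_nvme layer inside bdevperf reports the outstanding requests as aborted and resets the controller; once access is granted again the reset completes ("Resetting controller successful"). The in-flight job itself fails and bdevperf exits, which is why the kill -9 that follows finds no such process. Stripped of the test wrappers, the two RPCs exercised here are simply (rpc.py socket path assumed to be the default):

# host access is revoked, then granted again
rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0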
00:14:13.096 14:54:46 -- target/host_management.sh@91 -- # kill -9 82932 00:14:13.096 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82932) - No such process 00:14:13.096 14:54:46 -- target/host_management.sh@91 -- # true 00:14:13.096 14:54:46 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:13.096 14:54:46 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:13.096 14:54:46 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:13.096 14:54:46 -- nvmf/common.sh@520 -- # config=() 00:14:13.096 14:54:46 -- nvmf/common.sh@520 -- # local subsystem config 00:14:13.096 14:54:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:13.096 14:54:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:13.096 { 00:14:13.096 "params": { 00:14:13.096 "name": "Nvme$subsystem", 00:14:13.096 "trtype": "$TEST_TRANSPORT", 00:14:13.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.096 "adrfam": "ipv4", 00:14:13.096 "trsvcid": "$NVMF_PORT", 00:14:13.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.096 "hdgst": ${hdgst:-false}, 00:14:13.096 "ddgst": ${ddgst:-false} 00:14:13.096 }, 00:14:13.096 "method": "bdev_nvme_attach_controller" 00:14:13.096 } 00:14:13.096 EOF 00:14:13.096 )") 00:14:13.096 14:54:46 -- nvmf/common.sh@542 -- # cat 00:14:13.096 14:54:46 -- nvmf/common.sh@544 -- # jq . 00:14:13.096 14:54:46 -- nvmf/common.sh@545 -- # IFS=, 00:14:13.096 14:54:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:13.096 "params": { 00:14:13.096 "name": "Nvme0", 00:14:13.096 "trtype": "tcp", 00:14:13.096 "traddr": "10.0.0.2", 00:14:13.096 "adrfam": "ipv4", 00:14:13.096 "trsvcid": "4420", 00:14:13.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:13.096 "hdgst": false, 00:14:13.096 "ddgst": false 00:14:13.096 }, 00:14:13.096 "method": "bdev_nvme_attach_controller" 00:14:13.096 }' 00:14:13.096 [2024-12-01 14:54:46.092204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:13.096 [2024-12-01 14:54:46.092800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82982 ] 00:14:13.355 [2024-12-01 14:54:46.234540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.355 [2024-12-01 14:54:46.303558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.614 Running I/O for 1 seconds... 
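Before revoking host access in the first run, the test waits for the workload to make real progress; the '[' 2476 -ge 100 ']' check traced earlier comes from a waitforio-style poll of bdevperf's RPC socket. A minimal sketch of that loop, assuming the retry count from the trace and an arbitrary sleep interval (the RPC call and jq filter are the ones shown above):

# hedged sketch of the waitforio poll used before the host-removal step
for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && break
    sleep 0.25  # polling interval is an assumption
done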
00:14:14.552 00:14:14.552 Latency(us) 00:14:14.552 [2024-12-01T14:54:47.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.552 [2024-12-01T14:54:47.667Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:14.552 Verification LBA range: start 0x0 length 0x400 00:14:14.552 Nvme0n1 : 1.01 3996.45 249.78 0.00 0.00 15738.88 1221.35 21686.46 00:14:14.552 [2024-12-01T14:54:47.667Z] =================================================================================================================== 00:14:14.552 [2024-12-01T14:54:47.667Z] Total : 3996.45 249.78 0.00 0.00 15738.88 1221.35 21686.46 00:14:14.812 14:54:47 -- target/host_management.sh@101 -- # stoptarget 00:14:14.812 14:54:47 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:14.812 14:54:47 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:14.812 14:54:47 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:14.812 14:54:47 -- target/host_management.sh@40 -- # nvmftestfini 00:14:14.812 14:54:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:14.812 14:54:47 -- nvmf/common.sh@116 -- # sync 00:14:14.812 14:54:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:14.812 14:54:47 -- nvmf/common.sh@119 -- # set +e 00:14:14.812 14:54:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:14.812 14:54:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:14.812 rmmod nvme_tcp 00:14:14.812 rmmod nvme_fabrics 00:14:14.812 rmmod nvme_keyring 00:14:14.812 14:54:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:14.812 14:54:47 -- nvmf/common.sh@123 -- # set -e 00:14:14.812 14:54:47 -- nvmf/common.sh@124 -- # return 0 00:14:14.812 14:54:47 -- nvmf/common.sh@477 -- # '[' -n 82860 ']' 00:14:14.812 14:54:47 -- nvmf/common.sh@478 -- # killprocess 82860 00:14:14.812 14:54:47 -- common/autotest_common.sh@936 -- # '[' -z 82860 ']' 00:14:14.812 14:54:47 -- common/autotest_common.sh@940 -- # kill -0 82860 00:14:14.812 14:54:47 -- common/autotest_common.sh@941 -- # uname 00:14:14.812 14:54:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.812 14:54:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82860 00:14:15.071 killing process with pid 82860 00:14:15.071 14:54:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:15.071 14:54:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:15.071 14:54:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82860' 00:14:15.071 14:54:47 -- common/autotest_common.sh@955 -- # kill 82860 00:14:15.071 14:54:47 -- common/autotest_common.sh@960 -- # wait 82860 00:14:15.071 [2024-12-01 14:54:48.124395] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:15.071 14:54:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:15.071 14:54:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:15.071 14:54:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:15.071 14:54:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.071 14:54:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:15.071 14:54:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.071 14:54:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.071 14:54:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.071 14:54:48 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:15.071 00:14:15.071 real 0m5.608s 00:14:15.071 user 0m23.868s 00:14:15.071 sys 0m1.375s 00:14:15.071 14:54:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:15.071 ************************************ 00:14:15.071 END TEST nvmf_host_management 00:14:15.071 ************************************ 00:14:15.071 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:14:15.330 14:54:48 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:15.330 00:14:15.330 real 0m6.188s 00:14:15.330 user 0m24.077s 00:14:15.330 sys 0m1.625s 00:14:15.330 14:54:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:15.330 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:14:15.330 ************************************ 00:14:15.330 END TEST nvmf_host_management 00:14:15.330 ************************************ 00:14:15.330 14:54:48 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:15.330 14:54:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:15.330 14:54:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:15.330 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:14:15.331 ************************************ 00:14:15.331 START TEST nvmf_lvol 00:14:15.331 ************************************ 00:14:15.331 14:54:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:15.331 * Looking for test storage... 00:14:15.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:15.331 14:54:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:15.331 14:54:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:15.331 14:54:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:15.331 14:54:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:15.331 14:54:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:15.331 14:54:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:15.331 14:54:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:15.331 14:54:48 -- scripts/common.sh@335 -- # IFS=.-: 00:14:15.331 14:54:48 -- scripts/common.sh@335 -- # read -ra ver1 00:14:15.331 14:54:48 -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.331 14:54:48 -- scripts/common.sh@336 -- # read -ra ver2 00:14:15.331 14:54:48 -- scripts/common.sh@337 -- # local 'op=<' 00:14:15.331 14:54:48 -- scripts/common.sh@339 -- # ver1_l=2 00:14:15.331 14:54:48 -- scripts/common.sh@340 -- # ver2_l=1 00:14:15.331 14:54:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:15.331 14:54:48 -- scripts/common.sh@343 -- # case "$op" in 00:14:15.331 14:54:48 -- scripts/common.sh@344 -- # : 1 00:14:15.331 14:54:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:15.331 14:54:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.331 14:54:48 -- scripts/common.sh@364 -- # decimal 1 00:14:15.331 14:54:48 -- scripts/common.sh@352 -- # local d=1 00:14:15.331 14:54:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.331 14:54:48 -- scripts/common.sh@354 -- # echo 1 00:14:15.331 14:54:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:15.590 14:54:48 -- scripts/common.sh@365 -- # decimal 2 00:14:15.590 14:54:48 -- scripts/common.sh@352 -- # local d=2 00:14:15.590 14:54:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.590 14:54:48 -- scripts/common.sh@354 -- # echo 2 00:14:15.590 14:54:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:15.590 14:54:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:15.590 14:54:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:15.590 14:54:48 -- scripts/common.sh@367 -- # return 0 00:14:15.590 14:54:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.590 14:54:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:15.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.590 --rc genhtml_branch_coverage=1 00:14:15.590 --rc genhtml_function_coverage=1 00:14:15.590 --rc genhtml_legend=1 00:14:15.590 --rc geninfo_all_blocks=1 00:14:15.590 --rc geninfo_unexecuted_blocks=1 00:14:15.590 00:14:15.590 ' 00:14:15.590 14:54:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:15.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.590 --rc genhtml_branch_coverage=1 00:14:15.590 --rc genhtml_function_coverage=1 00:14:15.590 --rc genhtml_legend=1 00:14:15.590 --rc geninfo_all_blocks=1 00:14:15.590 --rc geninfo_unexecuted_blocks=1 00:14:15.590 00:14:15.590 ' 00:14:15.590 14:54:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:15.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.590 --rc genhtml_branch_coverage=1 00:14:15.590 --rc genhtml_function_coverage=1 00:14:15.590 --rc genhtml_legend=1 00:14:15.590 --rc geninfo_all_blocks=1 00:14:15.590 --rc geninfo_unexecuted_blocks=1 00:14:15.590 00:14:15.590 ' 00:14:15.590 14:54:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:15.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.590 --rc genhtml_branch_coverage=1 00:14:15.590 --rc genhtml_function_coverage=1 00:14:15.590 --rc genhtml_legend=1 00:14:15.590 --rc geninfo_all_blocks=1 00:14:15.590 --rc geninfo_unexecuted_blocks=1 00:14:15.590 00:14:15.590 ' 00:14:15.590 14:54:48 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:15.590 14:54:48 -- nvmf/common.sh@7 -- # uname -s 00:14:15.590 14:54:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.590 14:54:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.590 14:54:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.590 14:54:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.590 14:54:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.590 14:54:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.590 14:54:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.590 14:54:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.590 14:54:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.590 14:54:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.590 14:54:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:14:15.590 
14:54:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:14:15.590 14:54:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.590 14:54:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.590 14:54:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:15.590 14:54:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:15.590 14:54:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.591 14:54:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.591 14:54:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.591 14:54:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.591 14:54:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.591 14:54:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.591 14:54:48 -- paths/export.sh@5 -- # export PATH 00:14:15.591 14:54:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.591 14:54:48 -- nvmf/common.sh@46 -- # : 0 00:14:15.591 14:54:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:15.591 14:54:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:15.591 14:54:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:15.591 14:54:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.591 14:54:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.591 14:54:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
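The NVME_HOSTNQN/NVME_HOSTID pair sourced above comes straight from nvme-cli's generator, and NVME_CONNECT is plain 'nvme connect'. A minimal sketch of the same pattern, assuming nvme-cli is installed; the connect invocation below is illustrative only, reusing the subsystem NQN and 10.0.0.2:4420 listener that appear later in this log, and the generated UUID will differ per run:

    HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}              # keep only the UUID part for --hostid
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn "$HOSTNQN" --hostid "$HOSTID"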
00:14:15.591 14:54:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:15.591 14:54:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:15.591 14:54:48 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.591 14:54:48 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.591 14:54:48 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:15.591 14:54:48 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:15.591 14:54:48 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:15.591 14:54:48 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:15.591 14:54:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:15.591 14:54:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.591 14:54:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:15.591 14:54:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:15.591 14:54:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:15.591 14:54:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.591 14:54:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.591 14:54:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.591 14:54:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:15.591 14:54:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:15.591 14:54:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:15.591 14:54:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:15.591 14:54:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:15.591 14:54:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:15.591 14:54:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.591 14:54:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.591 14:54:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:15.591 14:54:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:15.591 14:54:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:15.591 14:54:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:15.591 14:54:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:15.591 14:54:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.591 14:54:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:15.591 14:54:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:15.591 14:54:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:15.591 14:54:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:15.591 14:54:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:15.591 14:54:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:15.591 Cannot find device "nvmf_tgt_br" 00:14:15.591 14:54:48 -- nvmf/common.sh@154 -- # true 00:14:15.591 14:54:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.591 Cannot find device "nvmf_tgt_br2" 00:14:15.591 14:54:48 -- nvmf/common.sh@155 -- # true 00:14:15.591 14:54:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:15.591 14:54:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:15.591 Cannot find device "nvmf_tgt_br" 00:14:15.591 14:54:48 -- nvmf/common.sh@157 -- # true 00:14:15.591 14:54:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:15.591 Cannot find device "nvmf_tgt_br2" 00:14:15.591 14:54:48 -- nvmf/common.sh@158 -- # true 00:14:15.591 14:54:48 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:14:15.591 14:54:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:15.591 14:54:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.591 14:54:48 -- nvmf/common.sh@161 -- # true 00:14:15.591 14:54:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.591 14:54:48 -- nvmf/common.sh@162 -- # true 00:14:15.591 14:54:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:15.591 14:54:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:15.591 14:54:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:15.591 14:54:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:15.591 14:54:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:15.591 14:54:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:15.591 14:54:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:15.591 14:54:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:15.591 14:54:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:15.591 14:54:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:15.591 14:54:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:15.591 14:54:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:15.591 14:54:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:15.591 14:54:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:15.591 14:54:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:15.591 14:54:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:15.850 14:54:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:15.850 14:54:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:15.850 14:54:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:15.850 14:54:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:15.850 14:54:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:15.850 14:54:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:15.850 14:54:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:15.850 14:54:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:15.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:14:15.850 00:14:15.850 --- 10.0.0.2 ping statistics --- 00:14:15.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.850 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:15.850 14:54:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:15.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:15.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:14:15.850 00:14:15.850 --- 10.0.0.3 ping statistics --- 00:14:15.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.850 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:15.850 14:54:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:15.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:15.850 00:14:15.850 --- 10.0.0.1 ping statistics --- 00:14:15.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.850 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:15.850 14:54:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.850 14:54:48 -- nvmf/common.sh@421 -- # return 0 00:14:15.850 14:54:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:15.850 14:54:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.850 14:54:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:15.850 14:54:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:15.850 14:54:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.850 14:54:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:15.850 14:54:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:15.850 14:54:48 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:15.850 14:54:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:15.850 14:54:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.850 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:14:15.851 14:54:48 -- nvmf/common.sh@469 -- # nvmfpid=83214 00:14:15.851 14:54:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:15.851 14:54:48 -- nvmf/common.sh@470 -- # waitforlisten 83214 00:14:15.851 14:54:48 -- common/autotest_common.sh@829 -- # '[' -z 83214 ']' 00:14:15.851 14:54:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.851 14:54:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.851 14:54:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.851 14:54:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.851 14:54:48 -- common/autotest_common.sh@10 -- # set +x 00:14:15.851 [2024-12-01 14:54:48.853428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:15.851 [2024-12-01 14:54:48.853516] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.109 [2024-12-01 14:54:48.992252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.109 [2024-12-01 14:54:49.061094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:16.109 [2024-12-01 14:54:49.061246] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.109 [2024-12-01 14:54:49.061258] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
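The ping checks above verify the topology that nvmf_veth_init builds: an initiator-side veth interface on the host, target interfaces moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the peer ends. A condensed sketch of that topology using the interface names and addresses from the log (a minimal version; the second target interface, cleanup, and error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host side should now reach the target namespace address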
00:14:16.109 [2024-12-01 14:54:49.061265] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.109 [2024-12-01 14:54:49.061421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.110 [2024-12-01 14:54:49.061917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.110 [2024-12-01 14:54:49.061933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.681 14:54:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.681 14:54:49 -- common/autotest_common.sh@862 -- # return 0 00:14:16.681 14:54:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:16.681 14:54:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.682 14:54:49 -- common/autotest_common.sh@10 -- # set +x 00:14:16.940 14:54:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.940 14:54:49 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:17.199 [2024-12-01 14:54:50.087110] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.199 14:54:50 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:17.458 14:54:50 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:17.458 14:54:50 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:17.717 14:54:50 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:17.717 14:54:50 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:17.975 14:54:51 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:18.233 14:54:51 -- target/nvmf_lvol.sh@29 -- # lvs=98a8f6e3-f60f-4975-973c-eeef2613f8ab 00:14:18.233 14:54:51 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 98a8f6e3-f60f-4975-973c-eeef2613f8ab lvol 20 00:14:18.491 14:54:51 -- target/nvmf_lvol.sh@32 -- # lvol=8ae33c4e-3a17-4ec1-bc28-802d35d0eef2 00:14:18.491 14:54:51 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:18.748 14:54:51 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ae33c4e-3a17-4ec1-bc28-802d35d0eef2 00:14:19.006 14:54:52 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:19.262 [2024-12-01 14:54:52.218596] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.262 14:54:52 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:19.520 14:54:52 -- target/nvmf_lvol.sh@42 -- # perf_pid=83360 00:14:19.520 14:54:52 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:19.520 14:54:52 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:20.454 14:54:53 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 8ae33c4e-3a17-4ec1-bc28-802d35d0eef2 MY_SNAPSHOT 
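By this point the lvol test has built its target entirely through scripts/rpc.py: two malloc bdevs striped into a raid0, an lvstore on top, a small lvol exported through an NVMe-oF TCP subsystem, and a snapshot taken while spdk_nvme_perf runs against it (the resize, clone, and inflate calls follow just below). A condensed sketch of that sequence, where rpc.py stands in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the shell variables are placeholders for the UUIDs each call returns in this run:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                        # -> Malloc0
    rpc.py bdev_malloc_create 64 512                        # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)       # 20 per LVOL_BDEV_INIT_SIZE
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30                      # 30 per LVOL_BDEV_FINAL_SIZE
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"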
00:14:21.019 14:54:53 -- target/nvmf_lvol.sh@47 -- # snapshot=49e6ca27-67b1-4551-b8ae-5f0e96b9d958 00:14:21.019 14:54:53 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 8ae33c4e-3a17-4ec1-bc28-802d35d0eef2 30 00:14:21.276 14:54:54 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 49e6ca27-67b1-4551-b8ae-5f0e96b9d958 MY_CLONE 00:14:21.534 14:54:54 -- target/nvmf_lvol.sh@49 -- # clone=03fab1a0-d96c-4df9-a37d-1d49b89f57c4 00:14:21.534 14:54:54 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 03fab1a0-d96c-4df9-a37d-1d49b89f57c4 00:14:22.469 14:54:55 -- target/nvmf_lvol.sh@53 -- # wait 83360 00:14:30.588 Initializing NVMe Controllers 00:14:30.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:30.588 Controller IO queue size 128, less than required. 00:14:30.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:30.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:30.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:30.588 Initialization complete. Launching workers. 00:14:30.588 ======================================================== 00:14:30.588 Latency(us) 00:14:30.588 Device Information : IOPS MiB/s Average min max 00:14:30.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7933.60 30.99 16145.75 1727.55 73809.02 00:14:30.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7246.90 28.31 17678.87 1576.26 72887.15 00:14:30.588 ======================================================== 00:14:30.588 Total : 15180.50 59.30 16877.63 1576.26 73809.02 00:14:30.588 00:14:30.588 14:55:02 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:30.588 14:55:03 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8ae33c4e-3a17-4ec1-bc28-802d35d0eef2 00:14:30.588 14:55:03 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 98a8f6e3-f60f-4975-973c-eeef2613f8ab 00:14:30.588 14:55:03 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:30.588 14:55:03 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:30.588 14:55:03 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:30.588 14:55:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:30.588 14:55:03 -- nvmf/common.sh@116 -- # sync 00:14:30.588 14:55:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:30.588 14:55:03 -- nvmf/common.sh@119 -- # set +e 00:14:30.588 14:55:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:30.588 14:55:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:30.588 rmmod nvme_tcp 00:14:30.588 rmmod nvme_fabrics 00:14:30.588 rmmod nvme_keyring 00:14:30.588 14:55:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:30.588 14:55:03 -- nvmf/common.sh@123 -- # set -e 00:14:30.588 14:55:03 -- nvmf/common.sh@124 -- # return 0 00:14:30.588 14:55:03 -- nvmf/common.sh@477 -- # '[' -n 83214 ']' 00:14:30.588 14:55:03 -- nvmf/common.sh@478 -- # killprocess 83214 00:14:30.588 14:55:03 -- common/autotest_common.sh@936 -- # '[' -z 83214 ']' 00:14:30.588 14:55:03 -- common/autotest_common.sh@940 -- # kill -0 83214 00:14:30.588 14:55:03 -- common/autotest_common.sh@941 -- # uname 00:14:30.588 
14:55:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:30.588 14:55:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83214 00:14:30.848 killing process with pid 83214 00:14:30.848 14:55:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:30.848 14:55:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:30.848 14:55:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83214' 00:14:30.848 14:55:03 -- common/autotest_common.sh@955 -- # kill 83214 00:14:30.848 14:55:03 -- common/autotest_common.sh@960 -- # wait 83214 00:14:31.107 14:55:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:31.107 14:55:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:31.107 14:55:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:31.107 14:55:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.107 14:55:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:31.107 14:55:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.107 14:55:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.107 14:55:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.107 14:55:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:31.107 ************************************ 00:14:31.107 END TEST nvmf_lvol 00:14:31.107 ************************************ 00:14:31.107 00:14:31.107 real 0m15.811s 00:14:31.107 user 1m6.044s 00:14:31.107 sys 0m3.822s 00:14:31.107 14:55:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:31.107 14:55:04 -- common/autotest_common.sh@10 -- # set +x 00:14:31.107 14:55:04 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:31.107 14:55:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:31.107 14:55:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.107 14:55:04 -- common/autotest_common.sh@10 -- # set +x 00:14:31.107 ************************************ 00:14:31.107 START TEST nvmf_lvs_grow 00:14:31.107 ************************************ 00:14:31.107 14:55:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:31.107 * Looking for test storage... 
00:14:31.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:31.107 14:55:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:31.107 14:55:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:31.107 14:55:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:31.367 14:55:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:31.367 14:55:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:31.367 14:55:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:31.367 14:55:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:31.367 14:55:04 -- scripts/common.sh@335 -- # IFS=.-: 00:14:31.367 14:55:04 -- scripts/common.sh@335 -- # read -ra ver1 00:14:31.367 14:55:04 -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.367 14:55:04 -- scripts/common.sh@336 -- # read -ra ver2 00:14:31.367 14:55:04 -- scripts/common.sh@337 -- # local 'op=<' 00:14:31.367 14:55:04 -- scripts/common.sh@339 -- # ver1_l=2 00:14:31.367 14:55:04 -- scripts/common.sh@340 -- # ver2_l=1 00:14:31.367 14:55:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:31.367 14:55:04 -- scripts/common.sh@343 -- # case "$op" in 00:14:31.367 14:55:04 -- scripts/common.sh@344 -- # : 1 00:14:31.367 14:55:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:31.367 14:55:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.367 14:55:04 -- scripts/common.sh@364 -- # decimal 1 00:14:31.367 14:55:04 -- scripts/common.sh@352 -- # local d=1 00:14:31.367 14:55:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.367 14:55:04 -- scripts/common.sh@354 -- # echo 1 00:14:31.367 14:55:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:31.367 14:55:04 -- scripts/common.sh@365 -- # decimal 2 00:14:31.367 14:55:04 -- scripts/common.sh@352 -- # local d=2 00:14:31.367 14:55:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.367 14:55:04 -- scripts/common.sh@354 -- # echo 2 00:14:31.367 14:55:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:31.367 14:55:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:31.367 14:55:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:31.367 14:55:04 -- scripts/common.sh@367 -- # return 0 00:14:31.367 14:55:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.367 14:55:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:31.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.367 --rc genhtml_branch_coverage=1 00:14:31.367 --rc genhtml_function_coverage=1 00:14:31.367 --rc genhtml_legend=1 00:14:31.367 --rc geninfo_all_blocks=1 00:14:31.367 --rc geninfo_unexecuted_blocks=1 00:14:31.367 00:14:31.367 ' 00:14:31.367 14:55:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:31.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.367 --rc genhtml_branch_coverage=1 00:14:31.367 --rc genhtml_function_coverage=1 00:14:31.367 --rc genhtml_legend=1 00:14:31.367 --rc geninfo_all_blocks=1 00:14:31.367 --rc geninfo_unexecuted_blocks=1 00:14:31.367 00:14:31.367 ' 00:14:31.367 14:55:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:31.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.367 --rc genhtml_branch_coverage=1 00:14:31.367 --rc genhtml_function_coverage=1 00:14:31.367 --rc genhtml_legend=1 00:14:31.367 --rc geninfo_all_blocks=1 00:14:31.367 --rc geninfo_unexecuted_blocks=1 00:14:31.367 00:14:31.367 ' 00:14:31.367 
14:55:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:31.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.367 --rc genhtml_branch_coverage=1 00:14:31.367 --rc genhtml_function_coverage=1 00:14:31.367 --rc genhtml_legend=1 00:14:31.367 --rc geninfo_all_blocks=1 00:14:31.367 --rc geninfo_unexecuted_blocks=1 00:14:31.367 00:14:31.367 ' 00:14:31.367 14:55:04 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.367 14:55:04 -- nvmf/common.sh@7 -- # uname -s 00:14:31.367 14:55:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.367 14:55:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.367 14:55:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.367 14:55:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.367 14:55:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.367 14:55:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.367 14:55:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.367 14:55:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.367 14:55:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.367 14:55:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.367 14:55:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:14:31.367 14:55:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:14:31.367 14:55:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.367 14:55:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.367 14:55:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.367 14:55:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.367 14:55:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.367 14:55:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.367 14:55:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.367 14:55:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.367 14:55:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.367 14:55:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.367 14:55:04 -- paths/export.sh@5 -- # export PATH 00:14:31.367 14:55:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.367 14:55:04 -- nvmf/common.sh@46 -- # : 0 00:14:31.367 14:55:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:31.367 14:55:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:31.367 14:55:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:31.368 14:55:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.368 14:55:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.368 14:55:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:31.368 14:55:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:31.368 14:55:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:31.368 14:55:04 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.368 14:55:04 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.368 14:55:04 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:31.368 14:55:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:31.368 14:55:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.368 14:55:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:31.368 14:55:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:31.368 14:55:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:31.368 14:55:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.368 14:55:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.368 14:55:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.368 14:55:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:31.368 14:55:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:31.368 14:55:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:31.368 14:55:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:31.368 14:55:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:31.368 14:55:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:31.368 14:55:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.368 14:55:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.368 14:55:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:31.368 14:55:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:31.368 14:55:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.368 14:55:04 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.368 14:55:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.368 14:55:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.368 14:55:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.368 14:55:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.368 14:55:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.368 14:55:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.368 14:55:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:31.368 14:55:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:31.368 Cannot find device "nvmf_tgt_br" 00:14:31.368 14:55:04 -- nvmf/common.sh@154 -- # true 00:14:31.368 14:55:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.368 Cannot find device "nvmf_tgt_br2" 00:14:31.368 14:55:04 -- nvmf/common.sh@155 -- # true 00:14:31.368 14:55:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:31.368 14:55:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:31.368 Cannot find device "nvmf_tgt_br" 00:14:31.368 14:55:04 -- nvmf/common.sh@157 -- # true 00:14:31.368 14:55:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:31.368 Cannot find device "nvmf_tgt_br2" 00:14:31.368 14:55:04 -- nvmf/common.sh@158 -- # true 00:14:31.368 14:55:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:31.368 14:55:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:31.627 14:55:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.627 14:55:04 -- nvmf/common.sh@161 -- # true 00:14:31.627 14:55:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.627 14:55:04 -- nvmf/common.sh@162 -- # true 00:14:31.627 14:55:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.627 14:55:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.627 14:55:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.627 14:55:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.627 14:55:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.627 14:55:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.627 14:55:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.627 14:55:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:31.627 14:55:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:31.627 14:55:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:31.627 14:55:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:31.627 14:55:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:31.627 14:55:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:31.627 14:55:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.627 14:55:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:31.627 14:55:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.627 14:55:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:31.627 14:55:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:31.627 14:55:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.627 14:55:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.627 14:55:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.627 14:55:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.627 14:55:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.627 14:55:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:31.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:14:31.627 00:14:31.627 --- 10.0.0.2 ping statistics --- 00:14:31.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.627 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:31.627 14:55:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:31.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:14:31.627 00:14:31.627 --- 10.0.0.3 ping statistics --- 00:14:31.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.627 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:31.627 14:55:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:14:31.627 00:14:31.627 --- 10.0.0.1 ping statistics --- 00:14:31.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.627 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:31.627 14:55:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.627 14:55:04 -- nvmf/common.sh@421 -- # return 0 00:14:31.627 14:55:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:31.627 14:55:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.627 14:55:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:31.627 14:55:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:31.627 14:55:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.627 14:55:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:31.627 14:55:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:31.627 14:55:04 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:31.627 14:55:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:31.627 14:55:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.627 14:55:04 -- common/autotest_common.sh@10 -- # set +x 00:14:31.627 14:55:04 -- nvmf/common.sh@469 -- # nvmfpid=83735 00:14:31.627 14:55:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:31.627 14:55:04 -- nvmf/common.sh@470 -- # waitforlisten 83735 00:14:31.627 14:55:04 -- common/autotest_common.sh@829 -- # '[' -z 83735 ']' 00:14:31.627 14:55:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
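waitforlisten blocks until the freshly started nvmf_tgt (pid 83735 here) answers on its RPC socket before any rpc.py calls are issued. Its implementation lives in autotest_common.sh and is not shown in this log; a hypothetical equivalent of the readiness loop could poll a cheap RPC until it succeeds:

    # Hypothetical sketch only, not the actual waitforlisten implementation.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break          # target is up and serving RPCs
        fi
        sleep 0.1
    done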
00:14:31.627 14:55:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.627 14:55:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.627 14:55:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.627 14:55:04 -- common/autotest_common.sh@10 -- # set +x 00:14:31.886 [2024-12-01 14:55:04.753759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:31.886 [2024-12-01 14:55:04.753953] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.886 [2024-12-01 14:55:04.889608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.886 [2024-12-01 14:55:04.996028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:31.886 [2024-12-01 14:55:04.996410] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.886 [2024-12-01 14:55:04.996438] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.886 [2024-12-01 14:55:04.996450] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.886 [2024-12-01 14:55:04.996488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.821 14:55:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.821 14:55:05 -- common/autotest_common.sh@862 -- # return 0 00:14:32.821 14:55:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:32.821 14:55:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.821 14:55:05 -- common/autotest_common.sh@10 -- # set +x 00:14:32.821 14:55:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.821 14:55:05 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:33.092 [2024-12-01 14:55:06.063009] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.092 14:55:06 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:33.092 14:55:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:33.092 14:55:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.092 14:55:06 -- common/autotest_common.sh@10 -- # set +x 00:14:33.092 ************************************ 00:14:33.092 START TEST lvs_grow_clean 00:14:33.092 ************************************ 00:14:33.092 14:55:06 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:33.093 14:55:06 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:33.093 14:55:06 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:33.093 14:55:06 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:33.093 14:55:06 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:33.093 14:55:06 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:33.093 14:55:06 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:33.093 14:55:06 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:33.093 14:55:06 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
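The lvs_grow_clean case that starts here drives a grow-in-place sequence: an lvstore is created on a file-backed AIO bdev, the backing file is enlarged, the AIO bdev is rescanned, and the lvstore is grown into the new space. Condensed from the rpc.py calls in this run, with rpc.py again standing in for scripts/rpc.py and $lvs as a placeholder for the returned lvstore UUID; the cluster counts in the final comment are the values observed in this log:

    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"
    rpc.py bdev_aio_create "$aio" aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_create -u "$lvs" lvol 150           # 150 per lvol_bdev_size_mb
    truncate -s 400M "$aio"                              # enlarge the backing file
    rpc.py bdev_aio_rescan aio_bdev                      # let the AIO bdev pick up the new size
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"              # extend the lvstore into the added space
    rpc.py bdev_lvol_get_lvstores -u "$lvs"              # total_data_clusters grows from 49 to 99 here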
00:14:33.093 14:55:06 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:33.371 14:55:06 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:33.371 14:55:06 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:33.642 14:55:06 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4d840b46-fe28-44b9-931a-3304553e2203 00:14:33.642 14:55:06 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:33.642 14:55:06 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:33.901 14:55:07 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:33.901 14:55:07 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:33.901 14:55:07 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4d840b46-fe28-44b9-931a-3304553e2203 lvol 150 00:14:34.160 14:55:07 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ad2e180b-f641-41c0-90b9-8c29b8de72a5 00:14:34.160 14:55:07 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:34.160 14:55:07 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:34.419 [2024-12-01 14:55:07.493697] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:34.419 [2024-12-01 14:55:07.493798] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:34.419 true 00:14:34.419 14:55:07 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:34.419 14:55:07 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:34.678 14:55:07 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:34.678 14:55:07 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:34.937 14:55:07 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ad2e180b-f641-41c0-90b9-8c29b8de72a5 00:14:35.195 14:55:08 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:35.454 [2024-12-01 14:55:08.346275] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.454 14:55:08 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:35.713 14:55:08 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83902 00:14:35.713 14:55:08 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:35.713 14:55:08 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:35.713 14:55:08 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83902 /var/tmp/bdevperf.sock 00:14:35.713 14:55:08 -- 
common/autotest_common.sh@829 -- # '[' -z 83902 ']' 00:14:35.713 14:55:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.713 14:55:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.713 14:55:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.713 14:55:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.713 14:55:08 -- common/autotest_common.sh@10 -- # set +x 00:14:35.713 [2024-12-01 14:55:08.714512] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:35.713 [2024-12-01 14:55:08.714605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83902 ] 00:14:35.972 [2024-12-01 14:55:08.846884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.972 [2024-12-01 14:55:08.900146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.540 14:55:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.540 14:55:09 -- common/autotest_common.sh@862 -- # return 0 00:14:36.540 14:55:09 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:37.106 Nvme0n1 00:14:37.106 14:55:09 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:37.106 [ 00:14:37.106 { 00:14:37.106 "aliases": [ 00:14:37.106 "ad2e180b-f641-41c0-90b9-8c29b8de72a5" 00:14:37.106 ], 00:14:37.106 "assigned_rate_limits": { 00:14:37.106 "r_mbytes_per_sec": 0, 00:14:37.106 "rw_ios_per_sec": 0, 00:14:37.106 "rw_mbytes_per_sec": 0, 00:14:37.106 "w_mbytes_per_sec": 0 00:14:37.106 }, 00:14:37.106 "block_size": 4096, 00:14:37.106 "claimed": false, 00:14:37.106 "driver_specific": { 00:14:37.106 "mp_policy": "active_passive", 00:14:37.106 "nvme": [ 00:14:37.106 { 00:14:37.106 "ctrlr_data": { 00:14:37.106 "ana_reporting": false, 00:14:37.106 "cntlid": 1, 00:14:37.106 "firmware_revision": "24.01.1", 00:14:37.106 "model_number": "SPDK bdev Controller", 00:14:37.106 "multi_ctrlr": true, 00:14:37.106 "oacs": { 00:14:37.106 "firmware": 0, 00:14:37.106 "format": 0, 00:14:37.106 "ns_manage": 0, 00:14:37.106 "security": 0 00:14:37.106 }, 00:14:37.106 "serial_number": "SPDK0", 00:14:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:37.106 "vendor_id": "0x8086" 00:14:37.106 }, 00:14:37.106 "ns_data": { 00:14:37.106 "can_share": true, 00:14:37.106 "id": 1 00:14:37.106 }, 00:14:37.106 "trid": { 00:14:37.106 "adrfam": "IPv4", 00:14:37.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:37.106 "traddr": "10.0.0.2", 00:14:37.106 "trsvcid": "4420", 00:14:37.106 "trtype": "TCP" 00:14:37.106 }, 00:14:37.106 "vs": { 00:14:37.106 "nvme_version": "1.3" 00:14:37.106 } 00:14:37.106 } 00:14:37.106 ] 00:14:37.106 }, 00:14:37.106 "name": "Nvme0n1", 00:14:37.106 "num_blocks": 38912, 00:14:37.106 "product_name": "NVMe disk", 00:14:37.106 "supported_io_types": { 00:14:37.106 "abort": true, 00:14:37.106 "compare": true, 00:14:37.106 "compare_and_write": true, 00:14:37.106 "flush": true, 00:14:37.107 "nvme_admin": 
true, 00:14:37.107 "nvme_io": true, 00:14:37.107 "read": true, 00:14:37.107 "reset": true, 00:14:37.107 "unmap": true, 00:14:37.107 "write": true, 00:14:37.107 "write_zeroes": true 00:14:37.107 }, 00:14:37.107 "uuid": "ad2e180b-f641-41c0-90b9-8c29b8de72a5", 00:14:37.107 "zoned": false 00:14:37.107 } 00:14:37.107 ] 00:14:37.107 14:55:10 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83950 00:14:37.107 14:55:10 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:37.107 14:55:10 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:37.365 Running I/O for 10 seconds... 00:14:38.302 Latency(us) 00:14:38.302 [2024-12-01T14:55:11.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.302 [2024-12-01T14:55:11.417Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.302 Nvme0n1 : 1.00 7565.00 29.55 0.00 0.00 0.00 0.00 0.00 00:14:38.302 [2024-12-01T14:55:11.417Z] =================================================================================================================== 00:14:38.302 [2024-12-01T14:55:11.417Z] Total : 7565.00 29.55 0.00 0.00 0.00 0.00 0.00 00:14:38.302 00:14:39.255 14:55:12 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:39.255 [2024-12-01T14:55:12.370Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.255 Nvme0n1 : 2.00 7555.00 29.51 0.00 0.00 0.00 0.00 0.00 00:14:39.255 [2024-12-01T14:55:12.370Z] =================================================================================================================== 00:14:39.255 [2024-12-01T14:55:12.370Z] Total : 7555.00 29.51 0.00 0.00 0.00 0.00 0.00 00:14:39.255 00:14:39.514 true 00:14:39.514 14:55:12 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:39.514 14:55:12 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:39.772 14:55:12 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:39.772 14:55:12 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:39.772 14:55:12 -- target/nvmf_lvs_grow.sh@65 -- # wait 83950 00:14:40.340 [2024-12-01T14:55:13.455Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.340 Nvme0n1 : 3.00 7436.33 29.05 0.00 0.00 0.00 0.00 0.00 00:14:40.340 [2024-12-01T14:55:13.455Z] =================================================================================================================== 00:14:40.340 [2024-12-01T14:55:13.455Z] Total : 7436.33 29.05 0.00 0.00 0.00 0.00 0.00 00:14:40.340 00:14:41.276 [2024-12-01T14:55:14.391Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.276 Nvme0n1 : 4.00 7419.50 28.98 0.00 0.00 0.00 0.00 0.00 00:14:41.276 [2024-12-01T14:55:14.391Z] =================================================================================================================== 00:14:41.276 [2024-12-01T14:55:14.391Z] Total : 7419.50 28.98 0.00 0.00 0.00 0.00 0.00 00:14:41.276 00:14:42.211 [2024-12-01T14:55:15.326Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.211 Nvme0n1 : 5.00 7398.60 28.90 0.00 0.00 0.00 0.00 0.00 00:14:42.211 [2024-12-01T14:55:15.326Z] 
=================================================================================================================== 00:14:42.211 [2024-12-01T14:55:15.326Z] Total : 7398.60 28.90 0.00 0.00 0.00 0.00 0.00 00:14:42.211 00:14:43.587 [2024-12-01T14:55:16.702Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.587 Nvme0n1 : 6.00 7412.00 28.95 0.00 0.00 0.00 0.00 0.00 00:14:43.587 [2024-12-01T14:55:16.702Z] =================================================================================================================== 00:14:43.587 [2024-12-01T14:55:16.702Z] Total : 7412.00 28.95 0.00 0.00 0.00 0.00 0.00 00:14:43.587 00:14:44.520 [2024-12-01T14:55:17.635Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.520 Nvme0n1 : 7.00 7395.29 28.89 0.00 0.00 0.00 0.00 0.00 00:14:44.520 [2024-12-01T14:55:17.635Z] =================================================================================================================== 00:14:44.520 [2024-12-01T14:55:17.635Z] Total : 7395.29 28.89 0.00 0.00 0.00 0.00 0.00 00:14:44.520 00:14:45.455 [2024-12-01T14:55:18.571Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.456 Nvme0n1 : 8.00 7382.75 28.84 0.00 0.00 0.00 0.00 0.00 00:14:45.456 [2024-12-01T14:55:18.571Z] =================================================================================================================== 00:14:45.456 [2024-12-01T14:55:18.571Z] Total : 7382.75 28.84 0.00 0.00 0.00 0.00 0.00 00:14:45.456 00:14:46.391 [2024-12-01T14:55:19.507Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.392 Nvme0n1 : 9.00 7343.78 28.69 0.00 0.00 0.00 0.00 0.00 00:14:46.392 [2024-12-01T14:55:19.507Z] =================================================================================================================== 00:14:46.392 [2024-12-01T14:55:19.507Z] Total : 7343.78 28.69 0.00 0.00 0.00 0.00 0.00 00:14:46.392 00:14:47.327 [2024-12-01T14:55:20.442Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.327 Nvme0n1 : 10.00 7293.70 28.49 0.00 0.00 0.00 0.00 0.00 00:14:47.327 [2024-12-01T14:55:20.442Z] =================================================================================================================== 00:14:47.327 [2024-12-01T14:55:20.442Z] Total : 7293.70 28.49 0.00 0.00 0.00 0.00 0.00 00:14:47.327 00:14:47.327 00:14:47.327 Latency(us) 00:14:47.327 [2024-12-01T14:55:20.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.327 [2024-12-01T14:55:20.442Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.327 Nvme0n1 : 10.01 7298.09 28.51 0.00 0.00 17529.47 7864.32 37653.41 00:14:47.327 [2024-12-01T14:55:20.442Z] =================================================================================================================== 00:14:47.327 [2024-12-01T14:55:20.442Z] Total : 7298.09 28.51 0.00 0.00 17529.47 7864.32 37653.41 00:14:47.327 0 00:14:47.327 14:55:20 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83902 00:14:47.327 14:55:20 -- common/autotest_common.sh@936 -- # '[' -z 83902 ']' 00:14:47.327 14:55:20 -- common/autotest_common.sh@940 -- # kill -0 83902 00:14:47.327 14:55:20 -- common/autotest_common.sh@941 -- # uname 00:14:47.327 14:55:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:47.327 14:55:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83902 00:14:47.327 killing process with pid 83902 00:14:47.327 
Received shutdown signal, test time was about 10.000000 seconds 00:14:47.327 00:14:47.327 Latency(us) 00:14:47.327 [2024-12-01T14:55:20.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.327 [2024-12-01T14:55:20.442Z] =================================================================================================================== 00:14:47.327 [2024-12-01T14:55:20.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.327 14:55:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:47.328 14:55:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:47.328 14:55:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83902' 00:14:47.328 14:55:20 -- common/autotest_common.sh@955 -- # kill 83902 00:14:47.328 14:55:20 -- common/autotest_common.sh@960 -- # wait 83902 00:14:47.587 14:55:20 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:47.846 14:55:20 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:47.846 14:55:20 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:48.104 14:55:21 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:48.104 14:55:21 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:48.104 14:55:21 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:48.362 [2024-12-01 14:55:21.333404] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:48.362 14:55:21 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:48.362 14:55:21 -- common/autotest_common.sh@650 -- # local es=0 00:14:48.362 14:55:21 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:48.362 14:55:21 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.362 14:55:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.362 14:55:21 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.362 14:55:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.362 14:55:21 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.362 14:55:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.362 14:55:21 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.362 14:55:21 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:48.362 14:55:21 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:48.620 2024/12/01 14:55:21 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:4d840b46-fe28-44b9-931a-3304553e2203], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:48.620 request: 00:14:48.620 { 00:14:48.620 "method": "bdev_lvol_get_lvstores", 00:14:48.620 "params": { 00:14:48.620 "uuid": "4d840b46-fe28-44b9-931a-3304553e2203" 00:14:48.620 } 00:14:48.620 } 00:14:48.620 Got JSON-RPC 
error response 00:14:48.620 GoRPCClient: error on JSON-RPC call 00:14:48.620 14:55:21 -- common/autotest_common.sh@653 -- # es=1 00:14:48.620 14:55:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:48.620 14:55:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:48.620 14:55:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:48.620 14:55:21 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:48.877 aio_bdev 00:14:48.877 14:55:21 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev ad2e180b-f641-41c0-90b9-8c29b8de72a5 00:14:48.877 14:55:21 -- common/autotest_common.sh@897 -- # local bdev_name=ad2e180b-f641-41c0-90b9-8c29b8de72a5 00:14:48.877 14:55:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.877 14:55:21 -- common/autotest_common.sh@899 -- # local i 00:14:48.877 14:55:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.877 14:55:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.877 14:55:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:49.135 14:55:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad2e180b-f641-41c0-90b9-8c29b8de72a5 -t 2000 00:14:49.394 [ 00:14:49.394 { 00:14:49.394 "aliases": [ 00:14:49.394 "lvs/lvol" 00:14:49.394 ], 00:14:49.394 "assigned_rate_limits": { 00:14:49.394 "r_mbytes_per_sec": 0, 00:14:49.394 "rw_ios_per_sec": 0, 00:14:49.394 "rw_mbytes_per_sec": 0, 00:14:49.394 "w_mbytes_per_sec": 0 00:14:49.394 }, 00:14:49.394 "block_size": 4096, 00:14:49.394 "claimed": false, 00:14:49.394 "driver_specific": { 00:14:49.394 "lvol": { 00:14:49.394 "base_bdev": "aio_bdev", 00:14:49.394 "clone": false, 00:14:49.394 "esnap_clone": false, 00:14:49.394 "lvol_store_uuid": "4d840b46-fe28-44b9-931a-3304553e2203", 00:14:49.394 "snapshot": false, 00:14:49.394 "thin_provision": false 00:14:49.394 } 00:14:49.394 }, 00:14:49.394 "name": "ad2e180b-f641-41c0-90b9-8c29b8de72a5", 00:14:49.394 "num_blocks": 38912, 00:14:49.394 "product_name": "Logical Volume", 00:14:49.394 "supported_io_types": { 00:14:49.394 "abort": false, 00:14:49.394 "compare": false, 00:14:49.394 "compare_and_write": false, 00:14:49.394 "flush": false, 00:14:49.394 "nvme_admin": false, 00:14:49.394 "nvme_io": false, 00:14:49.394 "read": true, 00:14:49.394 "reset": true, 00:14:49.394 "unmap": true, 00:14:49.394 "write": true, 00:14:49.394 "write_zeroes": true 00:14:49.394 }, 00:14:49.394 "uuid": "ad2e180b-f641-41c0-90b9-8c29b8de72a5", 00:14:49.394 "zoned": false 00:14:49.394 } 00:14:49.394 ] 00:14:49.394 14:55:22 -- common/autotest_common.sh@905 -- # return 0 00:14:49.394 14:55:22 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:49.394 14:55:22 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:49.653 14:55:22 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:49.653 14:55:22 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:49.653 14:55:22 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:49.911 14:55:22 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:49.911 14:55:22 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete ad2e180b-f641-41c0-90b9-8c29b8de72a5 00:14:49.911 14:55:23 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d840b46-fe28-44b9-931a-3304553e2203 00:14:50.169 14:55:23 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:50.428 14:55:23 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:50.995 ************************************ 00:14:50.995 END TEST lvs_grow_clean 00:14:50.995 ************************************ 00:14:50.995 00:14:50.995 real 0m17.779s 00:14:50.995 user 0m17.199s 00:14:50.995 sys 0m2.109s 00:14:50.995 14:55:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:50.995 14:55:23 -- common/autotest_common.sh@10 -- # set +x 00:14:50.995 14:55:23 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:50.995 14:55:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:50.995 14:55:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.995 14:55:23 -- common/autotest_common.sh@10 -- # set +x 00:14:50.995 ************************************ 00:14:50.995 START TEST lvs_grow_dirty 00:14:50.995 ************************************ 00:14:50.995 14:55:23 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:50.995 14:55:23 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:50.995 14:55:23 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:50.995 14:55:23 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:50.995 14:55:23 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:50.995 14:55:23 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:50.995 14:55:23 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:50.995 14:55:23 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:50.995 14:55:23 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:50.995 14:55:23 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:51.253 14:55:24 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:51.254 14:55:24 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:51.512 14:55:24 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1a36ab41-d758-4634-9f31-955b3ba5ea19 00:14:51.512 14:55:24 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:51.512 14:55:24 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:14:51.771 14:55:24 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:51.771 14:55:24 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:51.771 14:55:24 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 lvol 150 00:14:52.029 14:55:25 -- target/nvmf_lvs_grow.sh@33 -- # lvol=cbd36e16-af08-491d-80e3-65820ffed728 00:14:52.029 14:55:25 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:52.029 14:55:25 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:52.287 [2024-12-01 14:55:25.186522] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:52.287 [2024-12-01 14:55:25.186606] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:52.287 true 00:14:52.287 14:55:25 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:14:52.287 14:55:25 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:52.544 14:55:25 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:52.544 14:55:25 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:52.545 14:55:25 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cbd36e16-af08-491d-80e3-65820ffed728 00:14:52.803 14:55:25 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:53.062 14:55:26 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:53.321 14:55:26 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:53.321 14:55:26 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84333 00:14:53.321 14:55:26 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.321 14:55:26 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84333 /var/tmp/bdevperf.sock 00:14:53.321 14:55:26 -- common/autotest_common.sh@829 -- # '[' -z 84333 ']' 00:14:53.321 14:55:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.321 14:55:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.321 14:55:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.321 14:55:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.321 14:55:26 -- common/autotest_common.sh@10 -- # set +x 00:14:53.321 [2024-12-01 14:55:26.415515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:53.321 [2024-12-01 14:55:26.415623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84333 ] 00:14:53.579 [2024-12-01 14:55:26.551282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.579 [2024-12-01 14:55:26.615450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.515 14:55:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.515 14:55:27 -- common/autotest_common.sh@862 -- # return 0 00:14:54.515 14:55:27 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:54.515 Nvme0n1 00:14:54.515 14:55:27 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:54.774 [ 00:14:54.774 { 00:14:54.774 "aliases": [ 00:14:54.774 "cbd36e16-af08-491d-80e3-65820ffed728" 00:14:54.774 ], 00:14:54.774 "assigned_rate_limits": { 00:14:54.774 "r_mbytes_per_sec": 0, 00:14:54.774 "rw_ios_per_sec": 0, 00:14:54.774 "rw_mbytes_per_sec": 0, 00:14:54.774 "w_mbytes_per_sec": 0 00:14:54.774 }, 00:14:54.774 "block_size": 4096, 00:14:54.774 "claimed": false, 00:14:54.774 "driver_specific": { 00:14:54.774 "mp_policy": "active_passive", 00:14:54.774 "nvme": [ 00:14:54.774 { 00:14:54.774 "ctrlr_data": { 00:14:54.774 "ana_reporting": false, 00:14:54.775 "cntlid": 1, 00:14:54.775 "firmware_revision": "24.01.1", 00:14:54.775 "model_number": "SPDK bdev Controller", 00:14:54.775 "multi_ctrlr": true, 00:14:54.775 "oacs": { 00:14:54.775 "firmware": 0, 00:14:54.775 "format": 0, 00:14:54.775 "ns_manage": 0, 00:14:54.775 "security": 0 00:14:54.775 }, 00:14:54.775 "serial_number": "SPDK0", 00:14:54.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:54.775 "vendor_id": "0x8086" 00:14:54.775 }, 00:14:54.775 "ns_data": { 00:14:54.775 "can_share": true, 00:14:54.775 "id": 1 00:14:54.775 }, 00:14:54.775 "trid": { 00:14:54.775 "adrfam": "IPv4", 00:14:54.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:54.775 "traddr": "10.0.0.2", 00:14:54.775 "trsvcid": "4420", 00:14:54.775 "trtype": "TCP" 00:14:54.775 }, 00:14:54.775 "vs": { 00:14:54.775 "nvme_version": "1.3" 00:14:54.775 } 00:14:54.775 } 00:14:54.775 ] 00:14:54.775 }, 00:14:54.775 "name": "Nvme0n1", 00:14:54.775 "num_blocks": 38912, 00:14:54.775 "product_name": "NVMe disk", 00:14:54.775 "supported_io_types": { 00:14:54.775 "abort": true, 00:14:54.775 "compare": true, 00:14:54.775 "compare_and_write": true, 00:14:54.775 "flush": true, 00:14:54.775 "nvme_admin": true, 00:14:54.775 "nvme_io": true, 00:14:54.775 "read": true, 00:14:54.775 "reset": true, 00:14:54.775 "unmap": true, 00:14:54.775 "write": true, 00:14:54.775 "write_zeroes": true 00:14:54.775 }, 00:14:54.775 "uuid": "cbd36e16-af08-491d-80e3-65820ffed728", 00:14:54.775 "zoned": false 00:14:54.775 } 00:14:54.775 ] 00:14:54.775 14:55:27 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84382 00:14:54.775 14:55:27 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:54.775 14:55:27 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:55.034 Running I/O for 10 seconds... 
00:14:55.971 Latency(us) 00:14:55.971 [2024-12-01T14:55:29.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.971 [2024-12-01T14:55:29.086Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.971 Nvme0n1 : 1.00 7288.00 28.47 0.00 0.00 0.00 0.00 0.00 00:14:55.971 [2024-12-01T14:55:29.086Z] =================================================================================================================== 00:14:55.971 [2024-12-01T14:55:29.086Z] Total : 7288.00 28.47 0.00 0.00 0.00 0.00 0.00 00:14:55.971 00:14:56.907 14:55:29 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:14:56.907 [2024-12-01T14:55:30.022Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.907 Nvme0n1 : 2.00 7304.00 28.53 0.00 0.00 0.00 0.00 0.00 00:14:56.907 [2024-12-01T14:55:30.022Z] =================================================================================================================== 00:14:56.907 [2024-12-01T14:55:30.022Z] Total : 7304.00 28.53 0.00 0.00 0.00 0.00 0.00 00:14:56.907 00:14:57.165 true 00:14:57.165 14:55:30 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:14:57.165 14:55:30 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:57.423 14:55:30 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:57.423 14:55:30 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:57.423 14:55:30 -- target/nvmf_lvs_grow.sh@65 -- # wait 84382 00:14:57.990 [2024-12-01T14:55:31.105Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.990 Nvme0n1 : 3.00 7324.00 28.61 0.00 0.00 0.00 0.00 0.00 00:14:57.990 [2024-12-01T14:55:31.105Z] =================================================================================================================== 00:14:57.990 [2024-12-01T14:55:31.105Z] Total : 7324.00 28.61 0.00 0.00 0.00 0.00 0.00 00:14:57.990 00:14:58.925 [2024-12-01T14:55:32.040Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.925 Nvme0n1 : 4.00 7364.50 28.77 0.00 0.00 0.00 0.00 0.00 00:14:58.925 [2024-12-01T14:55:32.040Z] =================================================================================================================== 00:14:58.925 [2024-12-01T14:55:32.040Z] Total : 7364.50 28.77 0.00 0.00 0.00 0.00 0.00 00:14:58.925 00:14:59.859 [2024-12-01T14:55:32.974Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.860 Nvme0n1 : 5.00 7358.40 28.74 0.00 0.00 0.00 0.00 0.00 00:14:59.860 [2024-12-01T14:55:32.975Z] =================================================================================================================== 00:14:59.860 [2024-12-01T14:55:32.975Z] Total : 7358.40 28.74 0.00 0.00 0.00 0.00 0.00 00:14:59.860 00:15:01.232 [2024-12-01T14:55:34.347Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.232 Nvme0n1 : 6.00 7223.50 28.22 0.00 0.00 0.00 0.00 0.00 00:15:01.232 [2024-12-01T14:55:34.347Z] =================================================================================================================== 00:15:01.232 [2024-12-01T14:55:34.347Z] Total : 7223.50 28.22 0.00 0.00 0.00 0.00 0.00 00:15:01.232 00:15:02.213 [2024-12-01T14:55:35.328Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:15:02.213 Nvme0n1 : 7.00 7231.43 28.25 0.00 0.00 0.00 0.00 0.00 00:15:02.213 [2024-12-01T14:55:35.328Z] =================================================================================================================== 00:15:02.213 [2024-12-01T14:55:35.328Z] Total : 7231.43 28.25 0.00 0.00 0.00 0.00 0.00 00:15:02.213 00:15:03.161 [2024-12-01T14:55:36.276Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.161 Nvme0n1 : 8.00 7242.88 28.29 0.00 0.00 0.00 0.00 0.00 00:15:03.161 [2024-12-01T14:55:36.276Z] =================================================================================================================== 00:15:03.161 [2024-12-01T14:55:36.276Z] Total : 7242.88 28.29 0.00 0.00 0.00 0.00 0.00 00:15:03.161 00:15:04.096 [2024-12-01T14:55:37.211Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.096 Nvme0n1 : 9.00 7245.56 28.30 0.00 0.00 0.00 0.00 0.00 00:15:04.096 [2024-12-01T14:55:37.211Z] =================================================================================================================== 00:15:04.096 [2024-12-01T14:55:37.211Z] Total : 7245.56 28.30 0.00 0.00 0.00 0.00 0.00 00:15:04.096 00:15:05.030 [2024-12-01T14:55:38.145Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.030 Nvme0n1 : 10.00 7269.70 28.40 0.00 0.00 0.00 0.00 0.00 00:15:05.030 [2024-12-01T14:55:38.145Z] =================================================================================================================== 00:15:05.030 [2024-12-01T14:55:38.145Z] Total : 7269.70 28.40 0.00 0.00 0.00 0.00 0.00 00:15:05.030 00:15:05.030 00:15:05.030 Latency(us) 00:15:05.030 [2024-12-01T14:55:38.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.030 [2024-12-01T14:55:38.145Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.030 Nvme0n1 : 10.02 7269.77 28.40 0.00 0.00 17596.90 4051.32 154426.65 00:15:05.030 [2024-12-01T14:55:38.145Z] =================================================================================================================== 00:15:05.030 [2024-12-01T14:55:38.145Z] Total : 7269.77 28.40 0.00 0.00 17596.90 4051.32 154426.65 00:15:05.030 0 00:15:05.030 14:55:37 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84333 00:15:05.030 14:55:37 -- common/autotest_common.sh@936 -- # '[' -z 84333 ']' 00:15:05.030 14:55:37 -- common/autotest_common.sh@940 -- # kill -0 84333 00:15:05.030 14:55:37 -- common/autotest_common.sh@941 -- # uname 00:15:05.030 14:55:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:05.030 14:55:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84333 00:15:05.030 14:55:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:05.030 14:55:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:05.030 killing process with pid 84333 00:15:05.030 14:55:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84333' 00:15:05.030 Received shutdown signal, test time was about 10.000000 seconds 00:15:05.030 00:15:05.030 Latency(us) 00:15:05.030 [2024-12-01T14:55:38.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.030 [2024-12-01T14:55:38.145Z] =================================================================================================================== 00:15:05.030 [2024-12-01T14:55:38.145Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.030 14:55:38 -- common/autotest_common.sh@955 
-- # kill 84333 00:15:05.030 14:55:38 -- common/autotest_common.sh@960 -- # wait 84333 00:15:05.289 14:55:38 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:05.547 14:55:38 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:15:05.547 14:55:38 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:05.806 14:55:38 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:05.806 14:55:38 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:05.806 14:55:38 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83735 00:15:05.806 14:55:38 -- target/nvmf_lvs_grow.sh@74 -- # wait 83735 00:15:05.806 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83735 Killed "${NVMF_APP[@]}" "$@" 00:15:05.806 14:55:38 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:05.806 14:55:38 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:05.806 14:55:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:05.806 14:55:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.806 14:55:38 -- common/autotest_common.sh@10 -- # set +x 00:15:05.806 14:55:38 -- nvmf/common.sh@469 -- # nvmfpid=84533 00:15:05.806 14:55:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:05.806 14:55:38 -- nvmf/common.sh@470 -- # waitforlisten 84533 00:15:05.806 14:55:38 -- common/autotest_common.sh@829 -- # '[' -z 84533 ']' 00:15:05.806 14:55:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.806 14:55:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.806 14:55:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.806 14:55:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.806 14:55:38 -- common/autotest_common.sh@10 -- # set +x 00:15:05.806 [2024-12-01 14:55:38.823111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:05.806 [2024-12-01 14:55:38.823216] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.064 [2024-12-01 14:55:38.965019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.064 [2024-12-01 14:55:39.053276] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:06.064 [2024-12-01 14:55:39.053481] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.064 [2024-12-01 14:55:39.053492] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.064 [2024-12-01 14:55:39.053500] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:06.064 [2024-12-01 14:55:39.053531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.000 14:55:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.000 14:55:39 -- common/autotest_common.sh@862 -- # return 0 00:15:07.000 14:55:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:07.000 14:55:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.000 14:55:39 -- common/autotest_common.sh@10 -- # set +x 00:15:07.000 14:55:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.000 14:55:39 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:07.000 [2024-12-01 14:55:40.066313] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:07.000 [2024-12-01 14:55:40.066686] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:07.000 [2024-12-01 14:55:40.066880] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:07.259 14:55:40 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:07.259 14:55:40 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev cbd36e16-af08-491d-80e3-65820ffed728 00:15:07.259 14:55:40 -- common/autotest_common.sh@897 -- # local bdev_name=cbd36e16-af08-491d-80e3-65820ffed728 00:15:07.259 14:55:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:07.259 14:55:40 -- common/autotest_common.sh@899 -- # local i 00:15:07.259 14:55:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:07.259 14:55:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:07.259 14:55:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:07.519 14:55:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cbd36e16-af08-491d-80e3-65820ffed728 -t 2000 00:15:07.519 [ 00:15:07.519 { 00:15:07.519 "aliases": [ 00:15:07.519 "lvs/lvol" 00:15:07.519 ], 00:15:07.519 "assigned_rate_limits": { 00:15:07.519 "r_mbytes_per_sec": 0, 00:15:07.519 "rw_ios_per_sec": 0, 00:15:07.519 "rw_mbytes_per_sec": 0, 00:15:07.519 "w_mbytes_per_sec": 0 00:15:07.519 }, 00:15:07.519 "block_size": 4096, 00:15:07.519 "claimed": false, 00:15:07.519 "driver_specific": { 00:15:07.519 "lvol": { 00:15:07.519 "base_bdev": "aio_bdev", 00:15:07.519 "clone": false, 00:15:07.519 "esnap_clone": false, 00:15:07.519 "lvol_store_uuid": "1a36ab41-d758-4634-9f31-955b3ba5ea19", 00:15:07.519 "snapshot": false, 00:15:07.519 "thin_provision": false 00:15:07.519 } 00:15:07.519 }, 00:15:07.519 "name": "cbd36e16-af08-491d-80e3-65820ffed728", 00:15:07.519 "num_blocks": 38912, 00:15:07.519 "product_name": "Logical Volume", 00:15:07.519 "supported_io_types": { 00:15:07.519 "abort": false, 00:15:07.519 "compare": false, 00:15:07.519 "compare_and_write": false, 00:15:07.519 "flush": false, 00:15:07.519 "nvme_admin": false, 00:15:07.519 "nvme_io": false, 00:15:07.519 "read": true, 00:15:07.519 "reset": true, 00:15:07.519 "unmap": true, 00:15:07.519 "write": true, 00:15:07.519 "write_zeroes": true 00:15:07.519 }, 00:15:07.519 "uuid": "cbd36e16-af08-491d-80e3-65820ffed728", 00:15:07.519 "zoned": false 00:15:07.519 } 00:15:07.519 ] 00:15:07.519 14:55:40 -- common/autotest_common.sh@905 -- # return 0 00:15:07.519 14:55:40 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1a36ab41-d758-4634-9f31-955b3ba5ea19 00:15:07.519 14:55:40 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:07.777 14:55:40 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:07.777 14:55:40 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:15:07.777 14:55:40 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:08.035 14:55:41 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:08.035 14:55:41 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:08.293 [2024-12-01 14:55:41.347430] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:08.293 14:55:41 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:15:08.293 14:55:41 -- common/autotest_common.sh@650 -- # local es=0 00:15:08.293 14:55:41 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:15:08.293 14:55:41 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.293 14:55:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.293 14:55:41 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.293 14:55:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.293 14:55:41 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.293 14:55:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.293 14:55:41 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.293 14:55:41 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:08.293 14:55:41 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:15:08.551 2024/12/01 14:55:41 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:1a36ab41-d758-4634-9f31-955b3ba5ea19], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:08.551 request: 00:15:08.551 { 00:15:08.551 "method": "bdev_lvol_get_lvstores", 00:15:08.551 "params": { 00:15:08.551 "uuid": "1a36ab41-d758-4634-9f31-955b3ba5ea19" 00:15:08.551 } 00:15:08.551 } 00:15:08.551 Got JSON-RPC error response 00:15:08.551 GoRPCClient: error on JSON-RPC call 00:15:08.551 14:55:41 -- common/autotest_common.sh@653 -- # es=1 00:15:08.551 14:55:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:08.551 14:55:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:08.551 14:55:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:08.551 14:55:41 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:08.808 aio_bdev 00:15:08.808 14:55:41 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev cbd36e16-af08-491d-80e3-65820ffed728 00:15:08.808 14:55:41 -- common/autotest_common.sh@897 -- # local bdev_name=cbd36e16-af08-491d-80e3-65820ffed728 00:15:08.808 14:55:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:08.808 
14:55:41 -- common/autotest_common.sh@899 -- # local i 00:15:08.808 14:55:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:08.808 14:55:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:08.808 14:55:41 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:09.067 14:55:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cbd36e16-af08-491d-80e3-65820ffed728 -t 2000 00:15:09.325 [ 00:15:09.325 { 00:15:09.325 "aliases": [ 00:15:09.325 "lvs/lvol" 00:15:09.325 ], 00:15:09.325 "assigned_rate_limits": { 00:15:09.325 "r_mbytes_per_sec": 0, 00:15:09.325 "rw_ios_per_sec": 0, 00:15:09.325 "rw_mbytes_per_sec": 0, 00:15:09.325 "w_mbytes_per_sec": 0 00:15:09.325 }, 00:15:09.325 "block_size": 4096, 00:15:09.325 "claimed": false, 00:15:09.325 "driver_specific": { 00:15:09.326 "lvol": { 00:15:09.326 "base_bdev": "aio_bdev", 00:15:09.326 "clone": false, 00:15:09.326 "esnap_clone": false, 00:15:09.326 "lvol_store_uuid": "1a36ab41-d758-4634-9f31-955b3ba5ea19", 00:15:09.326 "snapshot": false, 00:15:09.326 "thin_provision": false 00:15:09.326 } 00:15:09.326 }, 00:15:09.326 "name": "cbd36e16-af08-491d-80e3-65820ffed728", 00:15:09.326 "num_blocks": 38912, 00:15:09.326 "product_name": "Logical Volume", 00:15:09.326 "supported_io_types": { 00:15:09.326 "abort": false, 00:15:09.326 "compare": false, 00:15:09.326 "compare_and_write": false, 00:15:09.326 "flush": false, 00:15:09.326 "nvme_admin": false, 00:15:09.326 "nvme_io": false, 00:15:09.326 "read": true, 00:15:09.326 "reset": true, 00:15:09.326 "unmap": true, 00:15:09.326 "write": true, 00:15:09.326 "write_zeroes": true 00:15:09.326 }, 00:15:09.326 "uuid": "cbd36e16-af08-491d-80e3-65820ffed728", 00:15:09.326 "zoned": false 00:15:09.326 } 00:15:09.326 ] 00:15:09.326 14:55:42 -- common/autotest_common.sh@905 -- # return 0 00:15:09.326 14:55:42 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:15:09.326 14:55:42 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:09.587 14:55:42 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:09.587 14:55:42 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:15:09.587 14:55:42 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:09.587 14:55:42 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:09.587 14:55:42 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cbd36e16-af08-491d-80e3-65820ffed728 00:15:09.844 14:55:42 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a36ab41-d758-4634-9f31-955b3ba5ea19 00:15:10.102 14:55:43 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:10.361 14:55:43 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:10.927 00:15:10.927 real 0m19.812s 00:15:10.927 user 0m38.470s 00:15:10.927 sys 0m10.098s 00:15:10.927 14:55:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:10.927 ************************************ 00:15:10.927 END TEST lvs_grow_dirty 00:15:10.927 ************************************ 00:15:10.927 14:55:43 -- common/autotest_common.sh@10 -- # set +x 00:15:10.927 14:55:43 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:10.927 14:55:43 -- common/autotest_common.sh@806 -- # type=--id 00:15:10.927 14:55:43 -- common/autotest_common.sh@807 -- # id=0 00:15:10.927 14:55:43 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:10.927 14:55:43 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:10.927 14:55:43 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:10.927 14:55:43 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:10.927 14:55:43 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:10.927 14:55:43 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:10.927 nvmf_trace.0 00:15:10.927 14:55:43 -- common/autotest_common.sh@821 -- # return 0 00:15:10.927 14:55:43 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:10.927 14:55:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:10.927 14:55:43 -- nvmf/common.sh@116 -- # sync 00:15:11.496 14:55:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:11.496 14:55:44 -- nvmf/common.sh@119 -- # set +e 00:15:11.496 14:55:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:11.496 14:55:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:11.496 rmmod nvme_tcp 00:15:11.496 rmmod nvme_fabrics 00:15:11.496 rmmod nvme_keyring 00:15:11.496 14:55:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:11.496 14:55:44 -- nvmf/common.sh@123 -- # set -e 00:15:11.496 14:55:44 -- nvmf/common.sh@124 -- # return 0 00:15:11.496 14:55:44 -- nvmf/common.sh@477 -- # '[' -n 84533 ']' 00:15:11.496 14:55:44 -- nvmf/common.sh@478 -- # killprocess 84533 00:15:11.496 14:55:44 -- common/autotest_common.sh@936 -- # '[' -z 84533 ']' 00:15:11.496 14:55:44 -- common/autotest_common.sh@940 -- # kill -0 84533 00:15:11.496 14:55:44 -- common/autotest_common.sh@941 -- # uname 00:15:11.496 14:55:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:11.496 14:55:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84533 00:15:11.496 14:55:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:11.496 killing process with pid 84533 00:15:11.496 14:55:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:11.496 14:55:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84533' 00:15:11.496 14:55:44 -- common/autotest_common.sh@955 -- # kill 84533 00:15:11.496 14:55:44 -- common/autotest_common.sh@960 -- # wait 84533 00:15:11.754 14:55:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:11.754 14:55:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:11.754 14:55:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:11.754 14:55:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.754 14:55:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:11.754 14:55:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.754 14:55:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.754 14:55:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.012 14:55:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:12.012 00:15:12.012 real 0m40.766s 00:15:12.012 user 1m2.380s 00:15:12.012 sys 0m13.550s 00:15:12.012 14:55:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:12.012 ************************************ 00:15:12.012 END TEST nvmf_lvs_grow 00:15:12.012 14:55:44 -- 
common/autotest_common.sh@10 -- # set +x 00:15:12.012 ************************************ 00:15:12.012 14:55:44 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:12.012 14:55:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:12.012 14:55:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:12.012 14:55:44 -- common/autotest_common.sh@10 -- # set +x 00:15:12.012 ************************************ 00:15:12.012 START TEST nvmf_bdev_io_wait 00:15:12.012 ************************************ 00:15:12.012 14:55:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:12.012 * Looking for test storage... 00:15:12.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:12.012 14:55:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:12.012 14:55:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:12.012 14:55:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:12.012 14:55:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:12.012 14:55:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:12.012 14:55:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:12.012 14:55:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:12.012 14:55:45 -- scripts/common.sh@335 -- # IFS=.-: 00:15:12.012 14:55:45 -- scripts/common.sh@335 -- # read -ra ver1 00:15:12.012 14:55:45 -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.012 14:55:45 -- scripts/common.sh@336 -- # read -ra ver2 00:15:12.012 14:55:45 -- scripts/common.sh@337 -- # local 'op=<' 00:15:12.012 14:55:45 -- scripts/common.sh@339 -- # ver1_l=2 00:15:12.012 14:55:45 -- scripts/common.sh@340 -- # ver2_l=1 00:15:12.012 14:55:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:12.012 14:55:45 -- scripts/common.sh@343 -- # case "$op" in 00:15:12.012 14:55:45 -- scripts/common.sh@344 -- # : 1 00:15:12.012 14:55:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:12.012 14:55:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.012 14:55:45 -- scripts/common.sh@364 -- # decimal 1 00:15:12.012 14:55:45 -- scripts/common.sh@352 -- # local d=1 00:15:12.012 14:55:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.012 14:55:45 -- scripts/common.sh@354 -- # echo 1 00:15:12.012 14:55:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:12.012 14:55:45 -- scripts/common.sh@365 -- # decimal 2 00:15:12.012 14:55:45 -- scripts/common.sh@352 -- # local d=2 00:15:12.012 14:55:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.012 14:55:45 -- scripts/common.sh@354 -- # echo 2 00:15:12.012 14:55:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:12.013 14:55:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:12.013 14:55:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:12.013 14:55:45 -- scripts/common.sh@367 -- # return 0 00:15:12.013 14:55:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.013 14:55:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:12.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.013 --rc genhtml_branch_coverage=1 00:15:12.013 --rc genhtml_function_coverage=1 00:15:12.013 --rc genhtml_legend=1 00:15:12.013 --rc geninfo_all_blocks=1 00:15:12.013 --rc geninfo_unexecuted_blocks=1 00:15:12.013 00:15:12.013 ' 00:15:12.013 14:55:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:12.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.013 --rc genhtml_branch_coverage=1 00:15:12.013 --rc genhtml_function_coverage=1 00:15:12.013 --rc genhtml_legend=1 00:15:12.013 --rc geninfo_all_blocks=1 00:15:12.013 --rc geninfo_unexecuted_blocks=1 00:15:12.013 00:15:12.013 ' 00:15:12.013 14:55:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:12.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.013 --rc genhtml_branch_coverage=1 00:15:12.013 --rc genhtml_function_coverage=1 00:15:12.013 --rc genhtml_legend=1 00:15:12.013 --rc geninfo_all_blocks=1 00:15:12.013 --rc geninfo_unexecuted_blocks=1 00:15:12.013 00:15:12.013 ' 00:15:12.013 14:55:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:12.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.013 --rc genhtml_branch_coverage=1 00:15:12.013 --rc genhtml_function_coverage=1 00:15:12.013 --rc genhtml_legend=1 00:15:12.013 --rc geninfo_all_blocks=1 00:15:12.013 --rc geninfo_unexecuted_blocks=1 00:15:12.013 00:15:12.013 ' 00:15:12.013 14:55:45 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.013 14:55:45 -- nvmf/common.sh@7 -- # uname -s 00:15:12.013 14:55:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.013 14:55:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.013 14:55:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.013 14:55:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.013 14:55:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.013 14:55:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.013 14:55:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.013 14:55:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.013 14:55:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.013 14:55:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.271 14:55:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 
00:15:12.271 14:55:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:15:12.271 14:55:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.271 14:55:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.271 14:55:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.271 14:55:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.271 14:55:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.271 14:55:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.271 14:55:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.271 14:55:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.271 14:55:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.272 14:55:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.272 14:55:45 -- paths/export.sh@5 -- # export PATH 00:15:12.272 14:55:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.272 14:55:45 -- nvmf/common.sh@46 -- # : 0 00:15:12.272 14:55:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:12.272 14:55:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:12.272 14:55:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:12.272 14:55:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.272 14:55:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.272 14:55:45 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:12.272 14:55:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:12.272 14:55:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:12.272 14:55:45 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:12.272 14:55:45 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:12.272 14:55:45 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:12.272 14:55:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:12.272 14:55:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.272 14:55:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:12.272 14:55:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:12.272 14:55:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:12.272 14:55:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.272 14:55:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.272 14:55:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.272 14:55:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:12.272 14:55:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:12.272 14:55:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:12.272 14:55:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:12.272 14:55:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:12.272 14:55:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:12.272 14:55:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.272 14:55:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.272 14:55:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:12.272 14:55:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:12.272 14:55:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.272 14:55:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.272 14:55:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.272 14:55:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.272 14:55:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.272 14:55:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.272 14:55:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.272 14:55:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.272 14:55:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:12.272 14:55:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:12.272 Cannot find device "nvmf_tgt_br" 00:15:12.272 14:55:45 -- nvmf/common.sh@154 -- # true 00:15:12.272 14:55:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.272 Cannot find device "nvmf_tgt_br2" 00:15:12.272 14:55:45 -- nvmf/common.sh@155 -- # true 00:15:12.272 14:55:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:12.272 14:55:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:12.272 Cannot find device "nvmf_tgt_br" 00:15:12.272 14:55:45 -- nvmf/common.sh@157 -- # true 00:15:12.272 14:55:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:12.272 Cannot find device "nvmf_tgt_br2" 00:15:12.272 14:55:45 -- nvmf/common.sh@158 -- # true 00:15:12.272 14:55:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:12.272 14:55:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:12.272 14:55:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.272 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.272 14:55:45 -- nvmf/common.sh@161 -- # true 00:15:12.272 14:55:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.272 14:55:45 -- nvmf/common.sh@162 -- # true 00:15:12.272 14:55:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.272 14:55:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.272 14:55:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.272 14:55:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.272 14:55:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.272 14:55:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.272 14:55:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.272 14:55:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:12.272 14:55:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:12.272 14:55:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:12.272 14:55:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:12.272 14:55:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:12.272 14:55:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:12.272 14:55:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.272 14:55:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.272 14:55:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.272 14:55:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:12.530 14:55:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:12.530 14:55:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.530 14:55:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.530 14:55:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.530 14:55:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.531 14:55:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.531 14:55:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:12.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:15:12.531 00:15:12.531 --- 10.0.0.2 ping statistics --- 00:15:12.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.531 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:12.531 14:55:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:12.531 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.531 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:15:12.531 00:15:12.531 --- 10.0.0.3 ping statistics --- 00:15:12.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.531 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:12.531 14:55:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:12.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:12.531 00:15:12.531 --- 10.0.0.1 ping statistics --- 00:15:12.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.531 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:12.531 14:55:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.531 14:55:45 -- nvmf/common.sh@421 -- # return 0 00:15:12.531 14:55:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:12.531 14:55:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.531 14:55:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:12.531 14:55:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:12.531 14:55:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.531 14:55:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:12.531 14:55:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:12.531 14:55:45 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:12.531 14:55:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:12.531 14:55:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.531 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:15:12.531 14:55:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:12.531 14:55:45 -- nvmf/common.sh@469 -- # nvmfpid=84959 00:15:12.531 14:55:45 -- nvmf/common.sh@470 -- # waitforlisten 84959 00:15:12.531 14:55:45 -- common/autotest_common.sh@829 -- # '[' -z 84959 ']' 00:15:12.531 14:55:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.531 14:55:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.531 14:55:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.531 14:55:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.531 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:15:12.531 [2024-12-01 14:55:45.530239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:12.531 [2024-12-01 14:55:45.530321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.789 [2024-12-01 14:55:45.667486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.789 [2024-12-01 14:55:45.754821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:12.789 [2024-12-01 14:55:45.755012] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.789 [2024-12-01 14:55:45.755026] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.789 [2024-12-01 14:55:45.755035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
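The nvmftestinit block above first tears down any stale interfaces and then rebuilds the test topology: a veth pair per endpoint, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, everything bridged over nvmf_br, iptables accept rules for NVMe/TCP port 4420, and ping checks in both directions before nvmf_tgt is launched inside the namespace. A minimal sketch of that setup, using the interface names and addresses from the trace (second target interface and error handling omitted):

    # Target side lives in its own network namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # allow bridge forwarding
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The target application itself is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -m 0xF --wait-for-rpc), which is where the EAL and reactor messages for pid 84959 that follow come from.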
00:15:12.789 [2024-12-01 14:55:45.755203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.789 [2024-12-01 14:55:45.755252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.789 [2024-12-01 14:55:45.755704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.789 [2024-12-01 14:55:45.755741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.789 14:55:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.789 14:55:45 -- common/autotest_common.sh@862 -- # return 0 00:15:12.789 14:55:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:12.789 14:55:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.789 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:15:12.789 14:55:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.789 14:55:45 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:12.789 14:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.789 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:15:12.789 14:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.789 14:55:45 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:12.789 14:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.789 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:15:13.050 14:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.050 14:55:45 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.050 14:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.050 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:15:13.050 [2024-12-01 14:55:45.952684] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.050 14:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.050 14:55:45 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:13.050 14:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.050 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:15:13.050 Malloc0 00:15:13.050 14:55:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.050 14:55:45 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:13.050 14:55:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.050 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:15:13.050 14:55:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.050 14:55:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.050 14:55:46 -- common/autotest_common.sh@10 -- # set +x 00:15:13.050 14:55:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.050 14:55:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.050 14:55:46 -- common/autotest_common.sh@10 -- # set +x 00:15:13.050 [2024-12-01 14:55:46.018202] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.050 14:55:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=85004 00:15:13.050 14:55:46 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:13.050 14:55:46 -- nvmf/common.sh@520 -- # config=() 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@30 -- # READ_PID=85006 00:15:13.050 14:55:46 -- nvmf/common.sh@520 -- # local subsystem config 00:15:13.050 14:55:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:13.050 14:55:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:13.050 { 00:15:13.050 "params": { 00:15:13.050 "name": "Nvme$subsystem", 00:15:13.050 "trtype": "$TEST_TRANSPORT", 00:15:13.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.050 "adrfam": "ipv4", 00:15:13.050 "trsvcid": "$NVMF_PORT", 00:15:13.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.050 "hdgst": ${hdgst:-false}, 00:15:13.050 "ddgst": ${ddgst:-false} 00:15:13.050 }, 00:15:13.050 "method": "bdev_nvme_attach_controller" 00:15:13.050 } 00:15:13.050 EOF 00:15:13.050 )") 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=85008 00:15:13.050 14:55:46 -- nvmf/common.sh@520 -- # config=() 00:15:13.050 14:55:46 -- nvmf/common.sh@520 -- # local subsystem config 00:15:13.050 14:55:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:13.050 14:55:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:13.050 { 00:15:13.050 "params": { 00:15:13.050 "name": "Nvme$subsystem", 00:15:13.050 "trtype": "$TEST_TRANSPORT", 00:15:13.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.050 "adrfam": "ipv4", 00:15:13.050 "trsvcid": "$NVMF_PORT", 00:15:13.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.050 "hdgst": ${hdgst:-false}, 00:15:13.050 "ddgst": ${ddgst:-false} 00:15:13.050 }, 00:15:13.050 "method": "bdev_nvme_attach_controller" 00:15:13.050 } 00:15:13.050 EOF 00:15:13.050 )") 00:15:13.050 14:55:46 -- nvmf/common.sh@542 -- # cat 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=85010 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@35 -- # sync 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:13.050 14:55:46 -- nvmf/common.sh@542 -- # cat 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:13.050 14:55:46 -- nvmf/common.sh@520 -- # config=() 00:15:13.050 14:55:46 -- nvmf/common.sh@520 -- # local subsystem config 00:15:13.050 14:55:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:13.050 14:55:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:13.050 { 00:15:13.050 "params": { 00:15:13.050 "name": "Nvme$subsystem", 00:15:13.050 "trtype": "$TEST_TRANSPORT", 00:15:13.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.050 "adrfam": "ipv4", 00:15:13.050 "trsvcid": "$NVMF_PORT", 00:15:13.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:15:13.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.050 "hdgst": ${hdgst:-false}, 00:15:13.050 "ddgst": ${ddgst:-false} 00:15:13.050 }, 00:15:13.050 "method": "bdev_nvme_attach_controller" 00:15:13.050 } 00:15:13.050 EOF 00:15:13.050 )") 00:15:13.050 14:55:46 -- nvmf/common.sh@544 -- # jq . 00:15:13.050 14:55:46 -- nvmf/common.sh@544 -- # jq . 00:15:13.050 14:55:46 -- nvmf/common.sh@545 -- # IFS=, 00:15:13.050 14:55:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:13.050 "params": { 00:15:13.050 "name": "Nvme1", 00:15:13.050 "trtype": "tcp", 00:15:13.050 "traddr": "10.0.0.2", 00:15:13.050 "adrfam": "ipv4", 00:15:13.050 "trsvcid": "4420", 00:15:13.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.050 "hdgst": false, 00:15:13.050 "ddgst": false 00:15:13.050 }, 00:15:13.050 "method": "bdev_nvme_attach_controller" 00:15:13.050 }' 00:15:13.050 14:55:46 -- nvmf/common.sh@542 -- # cat 00:15:13.050 14:55:46 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:13.050 14:55:46 -- nvmf/common.sh@520 -- # config=() 00:15:13.050 14:55:46 -- nvmf/common.sh@520 -- # local subsystem config 00:15:13.050 14:55:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:13.050 14:55:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:13.050 { 00:15:13.050 "params": { 00:15:13.050 "name": "Nvme$subsystem", 00:15:13.050 "trtype": "$TEST_TRANSPORT", 00:15:13.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.050 "adrfam": "ipv4", 00:15:13.050 "trsvcid": "$NVMF_PORT", 00:15:13.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.050 "hdgst": ${hdgst:-false}, 00:15:13.050 "ddgst": ${ddgst:-false} 00:15:13.050 }, 00:15:13.050 "method": "bdev_nvme_attach_controller" 00:15:13.050 } 00:15:13.050 EOF 00:15:13.050 )") 00:15:13.050 14:55:46 -- nvmf/common.sh@545 -- # IFS=, 00:15:13.050 14:55:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:13.050 "params": { 00:15:13.050 "name": "Nvme1", 00:15:13.051 "trtype": "tcp", 00:15:13.051 "traddr": "10.0.0.2", 00:15:13.051 "adrfam": "ipv4", 00:15:13.051 "trsvcid": "4420", 00:15:13.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.051 "hdgst": false, 00:15:13.051 "ddgst": false 00:15:13.051 }, 00:15:13.051 "method": "bdev_nvme_attach_controller" 00:15:13.051 }' 00:15:13.051 14:55:46 -- nvmf/common.sh@542 -- # cat 00:15:13.051 14:55:46 -- nvmf/common.sh@544 -- # jq . 00:15:13.051 14:55:46 -- nvmf/common.sh@545 -- # IFS=, 00:15:13.051 14:55:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:13.051 "params": { 00:15:13.051 "name": "Nvme1", 00:15:13.051 "trtype": "tcp", 00:15:13.051 "traddr": "10.0.0.2", 00:15:13.051 "adrfam": "ipv4", 00:15:13.051 "trsvcid": "4420", 00:15:13.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.051 "hdgst": false, 00:15:13.051 "ddgst": false 00:15:13.051 }, 00:15:13.051 "method": "bdev_nvme_attach_controller" 00:15:13.051 }' 00:15:13.051 14:55:46 -- nvmf/common.sh@544 -- # jq . 
00:15:13.051 14:55:46 -- nvmf/common.sh@545 -- # IFS=, 00:15:13.051 14:55:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:13.051 "params": { 00:15:13.051 "name": "Nvme1", 00:15:13.051 "trtype": "tcp", 00:15:13.051 "traddr": "10.0.0.2", 00:15:13.051 "adrfam": "ipv4", 00:15:13.051 "trsvcid": "4420", 00:15:13.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.051 "hdgst": false, 00:15:13.051 "ddgst": false 00:15:13.051 }, 00:15:13.051 "method": "bdev_nvme_attach_controller" 00:15:13.051 }' 00:15:13.051 [2024-12-01 14:55:46.087994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:13.051 [2024-12-01 14:55:46.088095] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:13.051 [2024-12-01 14:55:46.097228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:13.051 14:55:46 -- target/bdev_io_wait.sh@37 -- # wait 85004 00:15:13.051 [2024-12-01 14:55:46.098142] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:13.051 [2024-12-01 14:55:46.098484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:13.051 [2024-12-01 14:55:46.098558] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:13.051 [2024-12-01 14:55:46.106452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:13.051 [2024-12-01 14:55:46.106530] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:13.310 [2024-12-01 14:55:46.327565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.568 [2024-12-01 14:55:46.424353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.568 [2024-12-01 14:55:46.426378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:13.568 [2024-12-01 14:55:46.521943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:13.568 [2024-12-01 14:55:46.523869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.568 [2024-12-01 14:55:46.621824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:13.568 [2024-12-01 14:55:46.636125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.827 Running I/O for 1 seconds... 00:15:13.827 [2024-12-01 14:55:46.733175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:13.827 Running I/O for 1 seconds... 00:15:13.827 Running I/O for 1 seconds... 00:15:13.827 Running I/O for 1 seconds... 
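By this point the bdev_io_wait test has provisioned a 64 MiB Malloc0 namespace under nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and has started four bdevperf instances in parallel, one per workload, each pinned to its own core and each handed an attach-controller config generated by gen_nvmf_target_json via /dev/fd/63. A rough sketch of that launch pattern (bdevperf stands for build/examples/bdevperf; gen_nvmf_target_json is the suite helper whose resolved output is printed above):

    # Each generated config resolves to a bdev_nvme_attach_controller call against
    # 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1, so all four workers share one subsystem.
    bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
    READ_PID=$!
    bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    FLUSH_PID=$!
    bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

Each instance runs its 1-second workload at queue depth 128 with 4 KiB I/O; the per-workload latency tables are printed next.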
00:15:14.762 00:15:14.762 Latency(us) 00:15:14.762 [2024-12-01T14:55:47.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.762 [2024-12-01T14:55:47.877Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:14.762 Nvme1n1 : 1.00 213474.31 833.88 0.00 0.00 597.26 220.63 1333.06 00:15:14.762 [2024-12-01T14:55:47.877Z] =================================================================================================================== 00:15:14.762 [2024-12-01T14:55:47.877Z] Total : 213474.31 833.88 0.00 0.00 597.26 220.63 1333.06 00:15:14.762 00:15:14.762 Latency(us) 00:15:14.762 [2024-12-01T14:55:47.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.762 [2024-12-01T14:55:47.877Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:14.762 Nvme1n1 : 1.01 7718.25 30.15 0.00 0.00 16519.83 7060.01 21924.77 00:15:14.762 [2024-12-01T14:55:47.877Z] =================================================================================================================== 00:15:14.762 [2024-12-01T14:55:47.877Z] Total : 7718.25 30.15 0.00 0.00 16519.83 7060.01 21924.77 00:15:14.762 00:15:14.762 Latency(us) 00:15:14.762 [2024-12-01T14:55:47.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.762 [2024-12-01T14:55:47.877Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:14.762 Nvme1n1 : 1.01 8098.06 31.63 0.00 0.00 15749.15 6315.29 27048.49 00:15:14.762 [2024-12-01T14:55:47.877Z] =================================================================================================================== 00:15:14.762 [2024-12-01T14:55:47.877Z] Total : 8098.06 31.63 0.00 0.00 15749.15 6315.29 27048.49 00:15:15.020 00:15:15.020 Latency(us) 00:15:15.020 [2024-12-01T14:55:48.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.020 [2024-12-01T14:55:48.135Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:15.020 Nvme1n1 : 1.01 7683.74 30.01 0.00 0.00 16591.75 3276.80 29193.31 00:15:15.020 [2024-12-01T14:55:48.135Z] =================================================================================================================== 00:15:15.020 [2024-12-01T14:55:48.135Z] Total : 7683.74 30.01 0.00 0.00 16591.75 3276.80 29193.31 00:15:15.277 14:55:48 -- target/bdev_io_wait.sh@38 -- # wait 85006 00:15:15.277 14:55:48 -- target/bdev_io_wait.sh@39 -- # wait 85008 00:15:15.277 14:55:48 -- target/bdev_io_wait.sh@40 -- # wait 85010 00:15:15.277 14:55:48 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.277 14:55:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.278 14:55:48 -- common/autotest_common.sh@10 -- # set +x 00:15:15.278 14:55:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.278 14:55:48 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:15.278 14:55:48 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:15.278 14:55:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:15.278 14:55:48 -- nvmf/common.sh@116 -- # sync 00:15:15.278 14:55:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:15.278 14:55:48 -- nvmf/common.sh@119 -- # set +e 00:15:15.278 14:55:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:15.278 14:55:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:15.278 rmmod nvme_tcp 00:15:15.536 rmmod nvme_fabrics 00:15:15.536 rmmod nvme_keyring 00:15:15.536 14:55:48 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:15.536 14:55:48 -- nvmf/common.sh@123 -- # set -e 00:15:15.536 14:55:48 -- nvmf/common.sh@124 -- # return 0 00:15:15.536 14:55:48 -- nvmf/common.sh@477 -- # '[' -n 84959 ']' 00:15:15.536 14:55:48 -- nvmf/common.sh@478 -- # killprocess 84959 00:15:15.536 14:55:48 -- common/autotest_common.sh@936 -- # '[' -z 84959 ']' 00:15:15.536 14:55:48 -- common/autotest_common.sh@940 -- # kill -0 84959 00:15:15.536 14:55:48 -- common/autotest_common.sh@941 -- # uname 00:15:15.536 14:55:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.536 14:55:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84959 00:15:15.536 14:55:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:15.536 killing process with pid 84959 00:15:15.536 14:55:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:15.536 14:55:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84959' 00:15:15.536 14:55:48 -- common/autotest_common.sh@955 -- # kill 84959 00:15:15.536 14:55:48 -- common/autotest_common.sh@960 -- # wait 84959 00:15:15.536 14:55:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:15.536 14:55:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:15.536 14:55:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:15.536 14:55:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.537 14:55:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:15.537 14:55:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.537 14:55:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.537 14:55:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.795 14:55:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:15.795 00:15:15.795 real 0m3.729s 00:15:15.795 user 0m16.748s 00:15:15.795 sys 0m2.329s 00:15:15.795 14:55:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:15.795 14:55:48 -- common/autotest_common.sh@10 -- # set +x 00:15:15.795 ************************************ 00:15:15.795 END TEST nvmf_bdev_io_wait 00:15:15.795 ************************************ 00:15:15.795 14:55:48 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:15.795 14:55:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:15.795 14:55:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.795 14:55:48 -- common/autotest_common.sh@10 -- # set +x 00:15:15.795 ************************************ 00:15:15.795 START TEST nvmf_queue_depth 00:15:15.795 ************************************ 00:15:15.795 14:55:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:15.795 * Looking for test storage... 
00:15:15.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:15.796 14:55:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:15.796 14:55:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:15.796 14:55:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:15.796 14:55:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:15.796 14:55:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:15.796 14:55:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:15.796 14:55:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:15.796 14:55:48 -- scripts/common.sh@335 -- # IFS=.-: 00:15:15.796 14:55:48 -- scripts/common.sh@335 -- # read -ra ver1 00:15:15.796 14:55:48 -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.796 14:55:48 -- scripts/common.sh@336 -- # read -ra ver2 00:15:15.796 14:55:48 -- scripts/common.sh@337 -- # local 'op=<' 00:15:15.796 14:55:48 -- scripts/common.sh@339 -- # ver1_l=2 00:15:15.796 14:55:48 -- scripts/common.sh@340 -- # ver2_l=1 00:15:15.796 14:55:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:15.796 14:55:48 -- scripts/common.sh@343 -- # case "$op" in 00:15:15.796 14:55:48 -- scripts/common.sh@344 -- # : 1 00:15:15.796 14:55:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:15.796 14:55:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:15.796 14:55:48 -- scripts/common.sh@364 -- # decimal 1 00:15:15.796 14:55:48 -- scripts/common.sh@352 -- # local d=1 00:15:15.796 14:55:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.796 14:55:48 -- scripts/common.sh@354 -- # echo 1 00:15:15.796 14:55:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:15.796 14:55:48 -- scripts/common.sh@365 -- # decimal 2 00:15:15.796 14:55:48 -- scripts/common.sh@352 -- # local d=2 00:15:15.796 14:55:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.796 14:55:48 -- scripts/common.sh@354 -- # echo 2 00:15:15.796 14:55:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:15.796 14:55:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:15.796 14:55:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:15.796 14:55:48 -- scripts/common.sh@367 -- # return 0 00:15:15.796 14:55:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.796 14:55:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:15.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.796 --rc genhtml_branch_coverage=1 00:15:15.796 --rc genhtml_function_coverage=1 00:15:15.796 --rc genhtml_legend=1 00:15:15.796 --rc geninfo_all_blocks=1 00:15:15.796 --rc geninfo_unexecuted_blocks=1 00:15:15.796 00:15:15.796 ' 00:15:15.796 14:55:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:15.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.796 --rc genhtml_branch_coverage=1 00:15:15.796 --rc genhtml_function_coverage=1 00:15:15.796 --rc genhtml_legend=1 00:15:15.796 --rc geninfo_all_blocks=1 00:15:15.796 --rc geninfo_unexecuted_blocks=1 00:15:15.796 00:15:15.796 ' 00:15:15.796 14:55:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:15.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.796 --rc genhtml_branch_coverage=1 00:15:15.796 --rc genhtml_function_coverage=1 00:15:15.796 --rc genhtml_legend=1 00:15:15.796 --rc geninfo_all_blocks=1 00:15:15.796 --rc geninfo_unexecuted_blocks=1 00:15:15.796 00:15:15.796 ' 00:15:15.796 
14:55:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:15.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.796 --rc genhtml_branch_coverage=1 00:15:15.796 --rc genhtml_function_coverage=1 00:15:15.796 --rc genhtml_legend=1 00:15:15.796 --rc geninfo_all_blocks=1 00:15:15.796 --rc geninfo_unexecuted_blocks=1 00:15:15.796 00:15:15.796 ' 00:15:15.796 14:55:48 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:15.796 14:55:48 -- nvmf/common.sh@7 -- # uname -s 00:15:15.796 14:55:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.796 14:55:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.796 14:55:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.796 14:55:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.796 14:55:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.796 14:55:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.796 14:55:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.796 14:55:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.796 14:55:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.796 14:55:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.796 14:55:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:15:15.796 14:55:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:15:15.796 14:55:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.796 14:55:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.796 14:55:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:15.796 14:55:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.796 14:55:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.796 14:55:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.796 14:55:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.796 14:55:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.796 14:55:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.796 14:55:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.796 14:55:48 -- paths/export.sh@5 -- # export PATH 00:15:15.796 14:55:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.796 14:55:48 -- nvmf/common.sh@46 -- # : 0 00:15:15.796 14:55:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:15.796 14:55:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:15.796 14:55:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:15.796 14:55:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.796 14:55:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.796 14:55:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:15.796 14:55:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:15.796 14:55:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:16.055 14:55:48 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:16.055 14:55:48 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:16.055 14:55:48 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:16.055 14:55:48 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:16.055 14:55:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:16.055 14:55:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.055 14:55:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:16.055 14:55:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:16.055 14:55:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:16.055 14:55:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.055 14:55:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.055 14:55:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.055 14:55:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:16.055 14:55:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:16.055 14:55:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:16.055 14:55:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:16.055 14:55:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:16.055 14:55:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:16.055 14:55:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.055 14:55:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.055 14:55:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.055 14:55:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:16.055 14:55:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.055 14:55:48 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.055 14:55:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.055 14:55:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.055 14:55:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.055 14:55:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.055 14:55:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.055 14:55:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.055 14:55:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:16.055 14:55:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:16.055 Cannot find device "nvmf_tgt_br" 00:15:16.055 14:55:48 -- nvmf/common.sh@154 -- # true 00:15:16.055 14:55:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.055 Cannot find device "nvmf_tgt_br2" 00:15:16.055 14:55:48 -- nvmf/common.sh@155 -- # true 00:15:16.055 14:55:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:16.055 14:55:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:16.055 Cannot find device "nvmf_tgt_br" 00:15:16.055 14:55:48 -- nvmf/common.sh@157 -- # true 00:15:16.055 14:55:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:16.055 Cannot find device "nvmf_tgt_br2" 00:15:16.055 14:55:48 -- nvmf/common.sh@158 -- # true 00:15:16.055 14:55:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:16.055 14:55:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:16.055 14:55:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.055 14:55:49 -- nvmf/common.sh@161 -- # true 00:15:16.055 14:55:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.055 14:55:49 -- nvmf/common.sh@162 -- # true 00:15:16.055 14:55:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.055 14:55:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.055 14:55:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.055 14:55:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.055 14:55:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.055 14:55:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.056 14:55:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.056 14:55:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.056 14:55:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.056 14:55:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:16.056 14:55:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:16.056 14:55:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:16.056 14:55:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:16.056 14:55:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.056 14:55:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:16.314 14:55:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.314 14:55:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:16.314 14:55:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:16.314 14:55:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.314 14:55:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.314 14:55:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.314 14:55:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.314 14:55:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.314 14:55:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:16.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:15:16.314 00:15:16.314 --- 10.0.0.2 ping statistics --- 00:15:16.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.314 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:16.314 14:55:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:16.314 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:16.314 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:15:16.314 00:15:16.314 --- 10.0.0.3 ping statistics --- 00:15:16.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.314 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:16.314 14:55:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:16.314 00:15:16.314 --- 10.0.0.1 ping statistics --- 00:15:16.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.314 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:16.314 14:55:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.314 14:55:49 -- nvmf/common.sh@421 -- # return 0 00:15:16.314 14:55:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:16.314 14:55:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.314 14:55:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:16.314 14:55:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:16.314 14:55:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.314 14:55:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:16.314 14:55:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:16.314 14:55:49 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:16.314 14:55:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:16.314 14:55:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:16.314 14:55:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.314 14:55:49 -- nvmf/common.sh@469 -- # nvmfpid=85226 00:15:16.314 14:55:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:16.314 14:55:49 -- nvmf/common.sh@470 -- # waitforlisten 85226 00:15:16.314 14:55:49 -- common/autotest_common.sh@829 -- # '[' -z 85226 ']' 00:15:16.314 14:55:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.314 14:55:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.314 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:15:16.314 14:55:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.314 14:55:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.314 14:55:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.314 [2024-12-01 14:55:49.331483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:16.314 [2024-12-01 14:55:49.331581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.573 [2024-12-01 14:55:49.466236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.573 [2024-12-01 14:55:49.522529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:16.573 [2024-12-01 14:55:49.522660] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.573 [2024-12-01 14:55:49.522674] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.573 [2024-12-01 14:55:49.522682] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.573 [2024-12-01 14:55:49.522707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.508 14:55:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.508 14:55:50 -- common/autotest_common.sh@862 -- # return 0 00:15:17.508 14:55:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:17.508 14:55:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.508 14:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:17.508 14:55:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.508 14:55:50 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:17.508 14:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.508 14:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:17.508 [2024-12-01 14:55:50.402596] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.508 14:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.508 14:55:50 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:17.508 14:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.508 14:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:17.508 Malloc0 00:15:17.508 14:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.508 14:55:50 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.508 14:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.508 14:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:17.508 14:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.508 14:55:50 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:17.508 14:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.508 14:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:17.508 14:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.508 14:55:50 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:15:17.508 14:55:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.508 14:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:17.508 [2024-12-01 14:55:50.464510] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.508 14:55:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.508 14:55:50 -- target/queue_depth.sh@30 -- # bdevperf_pid=85276 00:15:17.508 14:55:50 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:17.508 14:55:50 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:17.508 14:55:50 -- target/queue_depth.sh@33 -- # waitforlisten 85276 /var/tmp/bdevperf.sock 00:15:17.508 14:55:50 -- common/autotest_common.sh@829 -- # '[' -z 85276 ']' 00:15:17.508 14:55:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.508 14:55:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.508 14:55:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.508 14:55:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.508 14:55:50 -- common/autotest_common.sh@10 -- # set +x 00:15:17.508 [2024-12-01 14:55:50.526073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:17.508 [2024-12-01 14:55:50.526682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85276 ] 00:15:17.766 [2024-12-01 14:55:50.669480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.766 [2024-12-01 14:55:50.760451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.701 14:55:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.701 14:55:51 -- common/autotest_common.sh@862 -- # return 0 00:15:18.701 14:55:51 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.701 14:55:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.701 14:55:51 -- common/autotest_common.sh@10 -- # set +x 00:15:18.701 NVMe0n1 00:15:18.701 14:55:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.701 14:55:51 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:18.701 Running I/O for 10 seconds... 
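The queue_depth test re-provisions the same Malloc0/cnode1 subsystem and then drives a single bdevperf differently: the perf app is started idle with -z on its own RPC socket, the NVMe-oF controller is attached to it through that socket, and the run is only then triggered with perform_tests. A sketch of that sequence using rpc.py directly (the trace goes through the suite's rpc_cmd/waitforlisten wrappers; paths shortened):

    # Start bdevperf idle; -z makes it wait on /var/tmp/bdevperf.sock for a trigger.
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # Attach the target's namespace over TCP; this exposes bdev NVMe0n1 inside bdevperf.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1

    # Kick off the 10 s verify run at queue depth 1024.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

    wait "$bdevperf_pid"

Queue depth 1024 against a single connection is the point of the test: it exercises request queueing in the TCP transport at a far higher depth than the 128 used above, and the IOPS/latency summary for the 10-second run follows.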
00:15:28.689 00:15:28.689 Latency(us) 00:15:28.689 [2024-12-01T14:56:01.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.689 [2024-12-01T14:56:01.804Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:28.689 Verification LBA range: start 0x0 length 0x4000 00:15:28.689 NVMe0n1 : 10.05 17181.09 67.11 0.00 0.00 59415.70 12630.57 45279.42 00:15:28.689 [2024-12-01T14:56:01.804Z] =================================================================================================================== 00:15:28.689 [2024-12-01T14:56:01.804Z] Total : 17181.09 67.11 0.00 0.00 59415.70 12630.57 45279.42 00:15:28.689 0 00:15:28.689 14:56:01 -- target/queue_depth.sh@39 -- # killprocess 85276 00:15:28.689 14:56:01 -- common/autotest_common.sh@936 -- # '[' -z 85276 ']' 00:15:28.689 14:56:01 -- common/autotest_common.sh@940 -- # kill -0 85276 00:15:28.689 14:56:01 -- common/autotest_common.sh@941 -- # uname 00:15:28.689 14:56:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:28.689 14:56:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85276 00:15:28.689 14:56:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:28.689 14:56:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:28.689 killing process with pid 85276 00:15:28.689 14:56:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85276' 00:15:28.690 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.690 00:15:28.690 Latency(us) 00:15:28.690 [2024-12-01T14:56:01.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.690 [2024-12-01T14:56:01.805Z] =================================================================================================================== 00:15:28.690 [2024-12-01T14:56:01.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.690 14:56:01 -- common/autotest_common.sh@955 -- # kill 85276 00:15:28.690 14:56:01 -- common/autotest_common.sh@960 -- # wait 85276 00:15:28.948 14:56:02 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:28.948 14:56:02 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:28.948 14:56:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:28.948 14:56:02 -- nvmf/common.sh@116 -- # sync 00:15:29.211 14:56:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:29.211 14:56:02 -- nvmf/common.sh@119 -- # set +e 00:15:29.211 14:56:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:29.211 14:56:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:29.211 rmmod nvme_tcp 00:15:29.211 rmmod nvme_fabrics 00:15:29.211 rmmod nvme_keyring 00:15:29.211 14:56:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:29.211 14:56:02 -- nvmf/common.sh@123 -- # set -e 00:15:29.211 14:56:02 -- nvmf/common.sh@124 -- # return 0 00:15:29.211 14:56:02 -- nvmf/common.sh@477 -- # '[' -n 85226 ']' 00:15:29.211 14:56:02 -- nvmf/common.sh@478 -- # killprocess 85226 00:15:29.211 14:56:02 -- common/autotest_common.sh@936 -- # '[' -z 85226 ']' 00:15:29.211 14:56:02 -- common/autotest_common.sh@940 -- # kill -0 85226 00:15:29.211 14:56:02 -- common/autotest_common.sh@941 -- # uname 00:15:29.211 14:56:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.211 14:56:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85226 00:15:29.211 14:56:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:29.211 14:56:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:15:29.211 killing process with pid 85226 00:15:29.211 14:56:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85226' 00:15:29.211 14:56:02 -- common/autotest_common.sh@955 -- # kill 85226 00:15:29.211 14:56:02 -- common/autotest_common.sh@960 -- # wait 85226 00:15:29.470 14:56:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:29.470 14:56:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:29.470 14:56:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:29.470 14:56:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.470 14:56:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:29.470 14:56:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.470 14:56:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.470 14:56:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.470 14:56:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:29.470 00:15:29.470 real 0m13.836s 00:15:29.470 user 0m22.994s 00:15:29.470 sys 0m2.522s 00:15:29.470 14:56:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:29.470 14:56:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.470 ************************************ 00:15:29.470 END TEST nvmf_queue_depth 00:15:29.470 ************************************ 00:15:29.729 14:56:02 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:29.729 14:56:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:29.729 14:56:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.729 14:56:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.729 ************************************ 00:15:29.729 START TEST nvmf_multipath 00:15:29.729 ************************************ 00:15:29.729 14:56:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:29.729 * Looking for test storage... 00:15:29.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:29.729 14:56:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:29.729 14:56:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:29.729 14:56:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:29.729 14:56:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:29.729 14:56:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:29.729 14:56:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:29.729 14:56:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:29.729 14:56:02 -- scripts/common.sh@335 -- # IFS=.-: 00:15:29.729 14:56:02 -- scripts/common.sh@335 -- # read -ra ver1 00:15:29.729 14:56:02 -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.729 14:56:02 -- scripts/common.sh@336 -- # read -ra ver2 00:15:29.729 14:56:02 -- scripts/common.sh@337 -- # local 'op=<' 00:15:29.729 14:56:02 -- scripts/common.sh@339 -- # ver1_l=2 00:15:29.729 14:56:02 -- scripts/common.sh@340 -- # ver2_l=1 00:15:29.729 14:56:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:29.729 14:56:02 -- scripts/common.sh@343 -- # case "$op" in 00:15:29.730 14:56:02 -- scripts/common.sh@344 -- # : 1 00:15:29.730 14:56:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:29.730 14:56:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.730 14:56:02 -- scripts/common.sh@364 -- # decimal 1 00:15:29.730 14:56:02 -- scripts/common.sh@352 -- # local d=1 00:15:29.730 14:56:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.730 14:56:02 -- scripts/common.sh@354 -- # echo 1 00:15:29.730 14:56:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:29.730 14:56:02 -- scripts/common.sh@365 -- # decimal 2 00:15:29.730 14:56:02 -- scripts/common.sh@352 -- # local d=2 00:15:29.730 14:56:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.730 14:56:02 -- scripts/common.sh@354 -- # echo 2 00:15:29.730 14:56:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:29.730 14:56:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:29.730 14:56:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:29.730 14:56:02 -- scripts/common.sh@367 -- # return 0 00:15:29.730 14:56:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.730 14:56:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.730 --rc genhtml_branch_coverage=1 00:15:29.730 --rc genhtml_function_coverage=1 00:15:29.730 --rc genhtml_legend=1 00:15:29.730 --rc geninfo_all_blocks=1 00:15:29.730 --rc geninfo_unexecuted_blocks=1 00:15:29.730 00:15:29.730 ' 00:15:29.730 14:56:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.730 --rc genhtml_branch_coverage=1 00:15:29.730 --rc genhtml_function_coverage=1 00:15:29.730 --rc genhtml_legend=1 00:15:29.730 --rc geninfo_all_blocks=1 00:15:29.730 --rc geninfo_unexecuted_blocks=1 00:15:29.730 00:15:29.730 ' 00:15:29.730 14:56:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.730 --rc genhtml_branch_coverage=1 00:15:29.730 --rc genhtml_function_coverage=1 00:15:29.730 --rc genhtml_legend=1 00:15:29.730 --rc geninfo_all_blocks=1 00:15:29.730 --rc geninfo_unexecuted_blocks=1 00:15:29.730 00:15:29.730 ' 00:15:29.730 14:56:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.730 --rc genhtml_branch_coverage=1 00:15:29.730 --rc genhtml_function_coverage=1 00:15:29.730 --rc genhtml_legend=1 00:15:29.730 --rc geninfo_all_blocks=1 00:15:29.730 --rc geninfo_unexecuted_blocks=1 00:15:29.730 00:15:29.730 ' 00:15:29.730 14:56:02 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:29.730 14:56:02 -- nvmf/common.sh@7 -- # uname -s 00:15:29.730 14:56:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.730 14:56:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.730 14:56:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.730 14:56:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.730 14:56:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.730 14:56:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.730 14:56:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.730 14:56:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.730 14:56:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.730 14:56:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.730 14:56:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:15:29.730 
14:56:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:15:29.730 14:56:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.730 14:56:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.730 14:56:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:29.730 14:56:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.730 14:56:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.730 14:56:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.730 14:56:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.730 14:56:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.730 14:56:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.730 14:56:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.730 14:56:02 -- paths/export.sh@5 -- # export PATH 00:15:29.730 14:56:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.730 14:56:02 -- nvmf/common.sh@46 -- # : 0 00:15:29.730 14:56:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:29.730 14:56:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:29.730 14:56:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:29.730 14:56:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.730 14:56:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.730 14:56:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
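For reference, the veth/network-namespace test bed that nvmftestinit assembles in the trace below boils down to roughly the following commands (a minimal sketch using the interface and address names from this log; the second target interface nvmf_tgt_if2/10.0.0.3 and all cleanup steps are omitted here):

  # create an isolated namespace for the SPDK target and veth pairs into it
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # address the two ends (initiator 10.0.0.1, target 10.0.0.2) and bring links up
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers together and allow NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  # sanity check: the target address must answer from the root namespace
  ping -c 1 10.0.0.2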
00:15:29.730 14:56:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:29.730 14:56:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:29.730 14:56:02 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.730 14:56:02 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.730 14:56:02 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:29.730 14:56:02 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.730 14:56:02 -- target/multipath.sh@43 -- # nvmftestinit 00:15:29.730 14:56:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:29.730 14:56:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.730 14:56:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:29.730 14:56:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:29.730 14:56:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:29.730 14:56:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.730 14:56:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.730 14:56:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.730 14:56:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:29.730 14:56:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:29.730 14:56:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:29.730 14:56:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:29.730 14:56:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:29.730 14:56:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:29.730 14:56:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.730 14:56:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.730 14:56:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:29.730 14:56:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:29.730 14:56:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:29.730 14:56:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:29.730 14:56:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:29.730 14:56:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.730 14:56:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:29.730 14:56:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:29.730 14:56:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:29.730 14:56:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:29.730 14:56:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:29.730 14:56:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:29.989 Cannot find device "nvmf_tgt_br" 00:15:29.989 14:56:02 -- nvmf/common.sh@154 -- # true 00:15:29.989 14:56:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.989 Cannot find device "nvmf_tgt_br2" 00:15:29.989 14:56:02 -- nvmf/common.sh@155 -- # true 00:15:29.989 14:56:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:29.989 14:56:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:29.989 Cannot find device "nvmf_tgt_br" 00:15:29.989 14:56:02 -- nvmf/common.sh@157 -- # true 00:15:29.989 14:56:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:29.989 Cannot find device "nvmf_tgt_br2" 00:15:29.989 14:56:02 -- nvmf/common.sh@158 -- # true 00:15:29.989 14:56:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:29.989 14:56:02 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:29.989 14:56:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.989 14:56:02 -- nvmf/common.sh@161 -- # true 00:15:29.989 14:56:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.989 14:56:02 -- nvmf/common.sh@162 -- # true 00:15:29.989 14:56:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:29.989 14:56:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:29.989 14:56:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:29.989 14:56:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:29.989 14:56:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:29.989 14:56:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:29.989 14:56:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:29.989 14:56:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:29.989 14:56:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:29.989 14:56:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:29.989 14:56:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:29.989 14:56:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:29.989 14:56:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:29.989 14:56:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:29.989 14:56:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.989 14:56:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.989 14:56:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:29.989 14:56:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:29.989 14:56:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.249 14:56:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.249 14:56:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.249 14:56:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.249 14:56:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.249 14:56:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:30.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:30.249 00:15:30.249 --- 10.0.0.2 ping statistics --- 00:15:30.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.249 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:30.249 14:56:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:30.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:30.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:30.249 00:15:30.249 --- 10.0.0.3 ping statistics --- 00:15:30.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.249 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:30.249 14:56:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:30.249 00:15:30.249 --- 10.0.0.1 ping statistics --- 00:15:30.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.249 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:30.249 14:56:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.249 14:56:03 -- nvmf/common.sh@421 -- # return 0 00:15:30.249 14:56:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:30.249 14:56:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.249 14:56:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:30.249 14:56:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:30.250 14:56:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.250 14:56:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:30.250 14:56:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:30.250 14:56:03 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:30.250 14:56:03 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:30.250 14:56:03 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:30.250 14:56:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:30.250 14:56:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.250 14:56:03 -- common/autotest_common.sh@10 -- # set +x 00:15:30.250 14:56:03 -- nvmf/common.sh@469 -- # nvmfpid=85611 00:15:30.250 14:56:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:30.250 14:56:03 -- nvmf/common.sh@470 -- # waitforlisten 85611 00:15:30.250 14:56:03 -- common/autotest_common.sh@829 -- # '[' -z 85611 ']' 00:15:30.250 14:56:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.250 14:56:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.250 14:56:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.250 14:56:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.250 14:56:03 -- common/autotest_common.sh@10 -- # set +x 00:15:30.250 [2024-12-01 14:56:03.231179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:30.250 [2024-12-01 14:56:03.231291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.509 [2024-12-01 14:56:03.365813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.509 [2024-12-01 14:56:03.442818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:30.509 [2024-12-01 14:56:03.443019] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:30.509 [2024-12-01 14:56:03.443032] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.509 [2024-12-01 14:56:03.443040] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.509 [2024-12-01 14:56:03.443218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.509 [2024-12-01 14:56:03.443389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.509 [2024-12-01 14:56:03.444142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.509 [2024-12-01 14:56:03.444202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.446 14:56:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.446 14:56:04 -- common/autotest_common.sh@862 -- # return 0 00:15:31.446 14:56:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:31.446 14:56:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:31.446 14:56:04 -- common/autotest_common.sh@10 -- # set +x 00:15:31.446 14:56:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.446 14:56:04 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:31.705 [2024-12-01 14:56:04.575720] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.705 14:56:04 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:31.964 Malloc0 00:15:31.964 14:56:04 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:32.223 14:56:05 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.481 14:56:05 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.481 [2024-12-01 14:56:05.577133] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.742 14:56:05 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:32.742 [2024-12-01 14:56:05.793342] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.742 14:56:05 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:33.002 14:56:06 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:33.262 14:56:06 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:33.262 14:56:06 -- common/autotest_common.sh@1187 -- # local i=0 00:15:33.262 14:56:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.262 14:56:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:33.262 14:56:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:35.167 14:56:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
00:15:35.167 14:56:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:35.167 14:56:08 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.167 14:56:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:35.167 14:56:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.167 14:56:08 -- common/autotest_common.sh@1197 -- # return 0 00:15:35.167 14:56:08 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:35.167 14:56:08 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:35.167 14:56:08 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:35.167 14:56:08 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:35.167 14:56:08 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:35.167 14:56:08 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:35.167 14:56:08 -- target/multipath.sh@38 -- # return 0 00:15:35.167 14:56:08 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:35.167 14:56:08 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:35.167 14:56:08 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:35.425 14:56:08 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:35.425 14:56:08 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:35.425 14:56:08 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:35.425 14:56:08 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:35.425 14:56:08 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:35.425 14:56:08 -- target/multipath.sh@22 -- # local timeout=20 00:15:35.425 14:56:08 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:35.425 14:56:08 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:35.425 14:56:08 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:35.425 14:56:08 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:35.425 14:56:08 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:35.425 14:56:08 -- target/multipath.sh@22 -- # local timeout=20 00:15:35.425 14:56:08 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:35.425 14:56:08 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:35.425 14:56:08 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:35.425 14:56:08 -- target/multipath.sh@85 -- # echo numa 00:15:35.425 14:56:08 -- target/multipath.sh@88 -- # fio_pid=85753 00:15:35.425 14:56:08 -- target/multipath.sh@90 -- # sleep 1 00:15:35.425 14:56:08 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:35.425 [global] 00:15:35.425 thread=1 00:15:35.425 invalidate=1 00:15:35.425 rw=randrw 00:15:35.425 time_based=1 00:15:35.425 runtime=6 00:15:35.425 ioengine=libaio 00:15:35.425 direct=1 00:15:35.425 bs=4096 00:15:35.425 iodepth=128 00:15:35.425 norandommap=0 00:15:35.425 numjobs=1 00:15:35.425 00:15:35.425 verify_dump=1 00:15:35.425 verify_backlog=512 00:15:35.425 verify_state_save=0 00:15:35.425 do_verify=1 00:15:35.425 verify=crc32c-intel 00:15:35.425 [job0] 00:15:35.425 filename=/dev/nvme0n1 00:15:35.425 Could not set queue depth (nvme0n1) 00:15:35.425 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:35.425 fio-3.35 00:15:35.425 Starting 1 thread 00:15:36.408 14:56:09 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:36.675 14:56:09 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:36.934 14:56:09 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:36.934 14:56:09 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:36.934 14:56:09 -- target/multipath.sh@22 -- # local timeout=20 00:15:36.934 14:56:09 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:36.934 14:56:09 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:36.934 14:56:09 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:36.934 14:56:09 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:36.934 14:56:09 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:36.934 14:56:09 -- target/multipath.sh@22 -- # local timeout=20 00:15:36.934 14:56:09 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:36.934 14:56:09 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:36.934 14:56:09 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:36.934 14:56:09 -- target/multipath.sh@25 -- # sleep 1s 00:15:37.870 14:56:10 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:37.870 14:56:10 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:37.870 14:56:10 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:37.870 14:56:10 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:38.128 14:56:11 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:38.386 14:56:11 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:38.386 14:56:11 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:38.386 14:56:11 -- target/multipath.sh@22 -- # local timeout=20 00:15:38.386 14:56:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:38.386 14:56:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:38.386 14:56:11 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:38.386 14:56:11 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:38.386 14:56:11 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:38.386 14:56:11 -- target/multipath.sh@22 -- # local timeout=20 00:15:38.386 14:56:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:38.386 14:56:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:38.386 14:56:11 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:38.386 14:56:11 -- target/multipath.sh@25 -- # sleep 1s 00:15:39.318 14:56:12 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:39.318 14:56:12 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:39.318 14:56:12 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:39.318 14:56:12 -- target/multipath.sh@104 -- # wait 85753 00:15:41.857 00:15:41.857 job0: (groupid=0, jobs=1): err= 0: pid=85781: Sun Dec 1 14:56:14 2024 00:15:41.857 read: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(304MiB/6003msec) 00:15:41.857 slat (usec): min=3, max=4970, avg=43.42, stdev=194.58 00:15:41.857 clat (usec): min=517, max=13771, avg=6762.96, stdev=1157.75 00:15:41.857 lat (usec): min=545, max=13799, avg=6806.38, stdev=1164.88 00:15:41.857 clat percentiles (usec): 00:15:41.857 | 1.00th=[ 3949], 5.00th=[ 5080], 10.00th=[ 5473], 20.00th=[ 5932], 00:15:41.857 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6915], 00:15:41.857 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8160], 95.00th=[ 8848], 00:15:41.857 | 99.00th=[10290], 99.50th=[10683], 99.90th=[11469], 99.95th=[11731], 00:15:41.857 | 99.99th=[12518] 00:15:41.857 bw ( KiB/s): min=15048, max=33736, per=52.12%, avg=27019.55, stdev=6134.77, samples=11 00:15:41.857 iops : min= 3762, max= 8434, avg=6754.82, stdev=1533.77, samples=11 00:15:41.857 write: IOPS=7508, BW=29.3MiB/s (30.8MB/s)(157MiB/5348msec); 0 zone resets 00:15:41.857 slat (usec): min=14, max=2175, avg=55.13, stdev=130.11 00:15:41.857 clat (usec): min=470, max=11966, avg=5853.39, stdev=995.95 00:15:41.857 lat (usec): min=501, max=12061, avg=5908.52, stdev=999.69 00:15:41.857 clat percentiles (usec): 00:15:41.857 | 1.00th=[ 3163], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 5211], 00:15:41.857 | 30.00th=[ 5473], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 6063], 00:15:41.857 | 70.00th=[ 6259], 80.00th=[ 6521], 90.00th=[ 6915], 95.00th=[ 7308], 00:15:41.857 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[10552], 99.95th=[10814], 00:15:41.857 | 99.99th=[11338] 00:15:41.857 bw ( KiB/s): min=15480, max=33072, per=89.78%, avg=26965.73, stdev=5813.16, samples=11 00:15:41.857 iops : min= 3870, max= 8268, avg=6741.36, stdev=1453.37, samples=11 00:15:41.857 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:41.857 lat (msec) : 2=0.05%, 4=2.18%, 10=96.76%, 20=0.99% 00:15:41.857 cpu : usr=5.91%, sys=25.49%, ctx=7234, majf=0, minf=78 00:15:41.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:41.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.857 issued rwts: total=77794,40155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.857 00:15:41.857 Run status group 0 (all jobs): 00:15:41.857 READ: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=304MiB (319MB), run=6003-6003msec 00:15:41.857 WRITE: bw=29.3MiB/s (30.8MB/s), 29.3MiB/s-29.3MiB/s (30.8MB/s-30.8MB/s), io=157MiB (164MB), run=5348-5348msec 00:15:41.857 00:15:41.857 Disk stats (read/write): 00:15:41.857 nvme0n1: ios=77111/39223, merge=0/0, ticks=484892/212535, in_queue=697427, util=98.66% 00:15:41.857 14:56:14 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:41.857 14:56:14 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:42.121 14:56:15 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:15:42.121 14:56:15 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:42.121 14:56:15 -- target/multipath.sh@22 -- # local timeout=20 00:15:42.121 14:56:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:42.121 14:56:15 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:42.121 14:56:15 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:42.121 14:56:15 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:42.121 14:56:15 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:42.121 14:56:15 -- target/multipath.sh@22 -- # local timeout=20 00:15:42.121 14:56:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:42.121 14:56:15 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:42.121 14:56:15 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:42.121 14:56:15 -- target/multipath.sh@25 -- # sleep 1s 00:15:43.096 14:56:16 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:43.096 14:56:16 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:43.096 14:56:16 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:43.096 14:56:16 -- target/multipath.sh@113 -- # echo round-robin 00:15:43.096 14:56:16 -- target/multipath.sh@116 -- # fio_pid=85911 00:15:43.096 14:56:16 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:43.096 14:56:16 -- target/multipath.sh@118 -- # sleep 1 00:15:43.356 [global] 00:15:43.356 thread=1 00:15:43.356 invalidate=1 00:15:43.356 rw=randrw 00:15:43.356 time_based=1 00:15:43.356 runtime=6 00:15:43.356 ioengine=libaio 00:15:43.356 direct=1 00:15:43.356 bs=4096 00:15:43.356 iodepth=128 00:15:43.356 norandommap=0 00:15:43.356 numjobs=1 00:15:43.356 00:15:43.356 verify_dump=1 00:15:43.356 verify_backlog=512 00:15:43.356 verify_state_save=0 00:15:43.356 do_verify=1 00:15:43.356 verify=crc32c-intel 00:15:43.356 [job0] 00:15:43.356 filename=/dev/nvme0n1 00:15:43.356 Could not set queue depth (nvme0n1) 00:15:43.356 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.356 fio-3.35 00:15:43.356 Starting 1 thread 00:15:44.292 14:56:17 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:44.551 14:56:17 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:44.810 14:56:17 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:44.810 14:56:17 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:44.810 14:56:17 -- target/multipath.sh@22 -- # local timeout=20 00:15:44.810 14:56:17 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:44.810 14:56:17 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:44.810 14:56:17 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:44.810 14:56:17 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:44.810 14:56:17 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:44.810 14:56:17 -- target/multipath.sh@22 -- # local timeout=20 00:15:44.810 14:56:17 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:44.810 14:56:17 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:44.810 14:56:17 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:44.810 14:56:17 -- target/multipath.sh@25 -- # sleep 1s 00:15:45.746 14:56:18 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:45.746 14:56:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:45.747 14:56:18 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:45.747 14:56:18 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:46.006 14:56:19 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:46.265 14:56:19 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:46.265 14:56:19 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:46.265 14:56:19 -- target/multipath.sh@22 -- # local timeout=20 00:15:46.265 14:56:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:46.265 14:56:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:46.265 14:56:19 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:46.265 14:56:19 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:46.265 14:56:19 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:46.265 14:56:19 -- target/multipath.sh@22 -- # local timeout=20 00:15:46.265 14:56:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:46.265 14:56:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:46.265 14:56:19 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:46.265 14:56:19 -- target/multipath.sh@25 -- # sleep 1s 00:15:47.202 14:56:20 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:47.202 14:56:20 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:47.202 14:56:20 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:47.202 14:56:20 -- target/multipath.sh@132 -- # wait 85911 00:15:49.737 00:15:49.737 job0: (groupid=0, jobs=1): err= 0: pid=85933: Sun Dec 1 14:56:22 2024 00:15:49.737 read: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(312MiB/5997msec) 00:15:49.737 slat (nsec): min=1754, max=6734.2k, avg=38071.28, stdev=185416.97 00:15:49.737 clat (usec): min=290, max=17061, avg=6653.03, stdev=1661.89 00:15:49.737 lat (usec): min=310, max=17096, avg=6691.10, stdev=1664.23 00:15:49.737 clat percentiles (usec): 00:15:49.737 | 1.00th=[ 2573], 5.00th=[ 4178], 10.00th=[ 5014], 20.00th=[ 5604], 00:15:49.737 | 30.00th=[ 5932], 40.00th=[ 6194], 50.00th=[ 6521], 60.00th=[ 6849], 00:15:49.737 | 70.00th=[ 7177], 80.00th=[ 7635], 90.00th=[ 8586], 95.00th=[ 9765], 00:15:49.737 | 99.00th=[11994], 99.50th=[12780], 99.90th=[14746], 99.95th=[15139], 00:15:49.737 | 99.99th=[16909] 00:15:49.737 bw ( KiB/s): min=10168, max=35168, per=53.08%, avg=28261.82, stdev=8373.19, samples=11 00:15:49.737 iops : min= 2542, max= 8792, avg=7065.27, stdev=2093.18, samples=11 00:15:49.737 write: IOPS=7959, BW=31.1MiB/s (32.6MB/s)(157MiB/5052msec); 0 zone resets 00:15:49.737 slat (usec): min=2, max=3725, avg=47.24, stdev=122.75 00:15:49.737 clat (usec): min=539, max=14493, avg=5705.43, stdev=1466.14 00:15:49.737 lat (usec): min=568, max=14510, avg=5752.66, stdev=1468.34 00:15:49.737 clat percentiles (usec): 00:15:49.737 | 1.00th=[ 2114], 5.00th=[ 2999], 10.00th=[ 3589], 20.00th=[ 4817], 00:15:49.737 | 30.00th=[ 5276], 40.00th=[ 5538], 50.00th=[ 5800], 60.00th=[ 5997], 00:15:49.737 | 70.00th=[ 6259], 80.00th=[ 6587], 90.00th=[ 7242], 95.00th=[ 8094], 00:15:49.737 | 99.00th=[ 9896], 99.50th=[10552], 99.90th=[11863], 99.95th=[12256], 00:15:49.737 | 99.99th=[13698] 00:15:49.737 bw ( KiB/s): min=10560, max=35984, per=88.79%, avg=28266.18, stdev=8189.44, samples=11 00:15:49.737 iops : min= 2640, max= 8996, avg=7066.55, stdev=2047.36, samples=11 00:15:49.737 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:15:49.737 lat (msec) : 2=0.53%, 4=6.64%, 10=89.68%, 20=3.10% 00:15:49.737 cpu : usr=6.03%, sys=22.07%, ctx=7364, majf=0, minf=127 00:15:49.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:49.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.737 issued rwts: total=79820,40209,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.737 00:15:49.737 Run status group 0 (all jobs): 00:15:49.737 READ: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=312MiB (327MB), run=5997-5997msec 00:15:49.737 WRITE: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=157MiB (165MB), run=5052-5052msec 00:15:49.737 00:15:49.737 Disk stats (read/write): 00:15:49.737 nvme0n1: ios=78833/39404, merge=0/0, ticks=490143/209418, in_queue=699561, util=98.65% 00:15:49.737 14:56:22 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:49.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:49.737 14:56:22 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:49.737 14:56:22 -- common/autotest_common.sh@1208 -- # local i=0 00:15:49.737 14:56:22 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:49.737 14:56:22 
-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.737 14:56:22 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:49.737 14:56:22 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.737 14:56:22 -- common/autotest_common.sh@1220 -- # return 0 00:15:49.737 14:56:22 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.996 14:56:22 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:49.997 14:56:22 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:49.997 14:56:22 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:49.997 14:56:22 -- target/multipath.sh@144 -- # nvmftestfini 00:15:49.997 14:56:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:49.997 14:56:22 -- nvmf/common.sh@116 -- # sync 00:15:49.997 14:56:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:49.997 14:56:22 -- nvmf/common.sh@119 -- # set +e 00:15:49.997 14:56:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:49.997 14:56:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:49.997 rmmod nvme_tcp 00:15:49.997 rmmod nvme_fabrics 00:15:49.997 rmmod nvme_keyring 00:15:49.997 14:56:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:49.997 14:56:23 -- nvmf/common.sh@123 -- # set -e 00:15:49.997 14:56:23 -- nvmf/common.sh@124 -- # return 0 00:15:49.997 14:56:23 -- nvmf/common.sh@477 -- # '[' -n 85611 ']' 00:15:49.997 14:56:23 -- nvmf/common.sh@478 -- # killprocess 85611 00:15:49.997 14:56:23 -- common/autotest_common.sh@936 -- # '[' -z 85611 ']' 00:15:49.997 14:56:23 -- common/autotest_common.sh@940 -- # kill -0 85611 00:15:49.997 14:56:23 -- common/autotest_common.sh@941 -- # uname 00:15:49.997 14:56:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:49.997 14:56:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85611 00:15:49.997 killing process with pid 85611 00:15:49.997 14:56:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:49.997 14:56:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:49.997 14:56:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85611' 00:15:49.997 14:56:23 -- common/autotest_common.sh@955 -- # kill 85611 00:15:49.997 14:56:23 -- common/autotest_common.sh@960 -- # wait 85611 00:15:50.257 14:56:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:50.257 14:56:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:50.257 14:56:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:50.257 14:56:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.257 14:56:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:50.257 14:56:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.257 14:56:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.257 14:56:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.257 14:56:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:50.257 00:15:50.257 real 0m20.678s 00:15:50.257 user 1m21.026s 00:15:50.257 sys 0m6.474s 00:15:50.257 14:56:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:50.257 14:56:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.257 ************************************ 00:15:50.257 END TEST nvmf_multipath 00:15:50.257 ************************************ 00:15:50.257 14:56:23 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:50.257 14:56:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:50.257 14:56:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:50.257 14:56:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.516 ************************************ 00:15:50.516 START TEST nvmf_zcopy 00:15:50.516 ************************************ 00:15:50.516 14:56:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:50.516 * Looking for test storage... 00:15:50.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:50.516 14:56:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:50.516 14:56:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:50.516 14:56:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:50.516 14:56:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:50.516 14:56:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:50.516 14:56:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:50.516 14:56:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:50.516 14:56:23 -- scripts/common.sh@335 -- # IFS=.-: 00:15:50.516 14:56:23 -- scripts/common.sh@335 -- # read -ra ver1 00:15:50.516 14:56:23 -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.516 14:56:23 -- scripts/common.sh@336 -- # read -ra ver2 00:15:50.516 14:56:23 -- scripts/common.sh@337 -- # local 'op=<' 00:15:50.516 14:56:23 -- scripts/common.sh@339 -- # ver1_l=2 00:15:50.516 14:56:23 -- scripts/common.sh@340 -- # ver2_l=1 00:15:50.516 14:56:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:50.516 14:56:23 -- scripts/common.sh@343 -- # case "$op" in 00:15:50.516 14:56:23 -- scripts/common.sh@344 -- # : 1 00:15:50.516 14:56:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:50.516 14:56:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:50.516 14:56:23 -- scripts/common.sh@364 -- # decimal 1 00:15:50.516 14:56:23 -- scripts/common.sh@352 -- # local d=1 00:15:50.516 14:56:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.516 14:56:23 -- scripts/common.sh@354 -- # echo 1 00:15:50.516 14:56:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:50.516 14:56:23 -- scripts/common.sh@365 -- # decimal 2 00:15:50.516 14:56:23 -- scripts/common.sh@352 -- # local d=2 00:15:50.516 14:56:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.516 14:56:23 -- scripts/common.sh@354 -- # echo 2 00:15:50.516 14:56:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:50.516 14:56:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:50.516 14:56:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:50.516 14:56:23 -- scripts/common.sh@367 -- # return 0 00:15:50.516 14:56:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.516 14:56:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:50.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.516 --rc genhtml_branch_coverage=1 00:15:50.516 --rc genhtml_function_coverage=1 00:15:50.516 --rc genhtml_legend=1 00:15:50.516 --rc geninfo_all_blocks=1 00:15:50.516 --rc geninfo_unexecuted_blocks=1 00:15:50.516 00:15:50.516 ' 00:15:50.516 14:56:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:50.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.516 --rc genhtml_branch_coverage=1 00:15:50.516 --rc genhtml_function_coverage=1 00:15:50.516 --rc genhtml_legend=1 00:15:50.516 --rc geninfo_all_blocks=1 00:15:50.516 --rc geninfo_unexecuted_blocks=1 00:15:50.516 00:15:50.516 ' 00:15:50.516 14:56:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:50.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.516 --rc genhtml_branch_coverage=1 00:15:50.516 --rc genhtml_function_coverage=1 00:15:50.516 --rc genhtml_legend=1 00:15:50.516 --rc geninfo_all_blocks=1 00:15:50.516 --rc geninfo_unexecuted_blocks=1 00:15:50.516 00:15:50.516 ' 00:15:50.516 14:56:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:50.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.516 --rc genhtml_branch_coverage=1 00:15:50.516 --rc genhtml_function_coverage=1 00:15:50.516 --rc genhtml_legend=1 00:15:50.516 --rc geninfo_all_blocks=1 00:15:50.516 --rc geninfo_unexecuted_blocks=1 00:15:50.516 00:15:50.516 ' 00:15:50.516 14:56:23 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:50.516 14:56:23 -- nvmf/common.sh@7 -- # uname -s 00:15:50.516 14:56:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.516 14:56:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.516 14:56:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.516 14:56:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.516 14:56:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.516 14:56:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.516 14:56:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.516 14:56:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.516 14:56:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.516 14:56:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.516 14:56:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:15:50.516 
14:56:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:15:50.516 14:56:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.516 14:56:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.516 14:56:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:50.516 14:56:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:50.516 14:56:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.516 14:56:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.516 14:56:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.517 14:56:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.517 14:56:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.517 14:56:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.517 14:56:23 -- paths/export.sh@5 -- # export PATH 00:15:50.517 14:56:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.517 14:56:23 -- nvmf/common.sh@46 -- # : 0 00:15:50.517 14:56:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:50.517 14:56:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:50.517 14:56:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:50.517 14:56:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.517 14:56:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.517 14:56:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
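The NVME_HOSTNQN, NVME_HOSTID and NVME_CONNECT values defined above are what the tests expand into an actual fabric login; roughly the following, mirroring the invocation seen in the multipath run earlier with the same subsystem NQN and first target address (shown only as an illustrative sketch; the multipath test additionally appended -g -G):

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b \
               --hostid=2d843004-a791-47f3-8dd7-3d04462c368b \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420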
00:15:50.517 14:56:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:50.517 14:56:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:50.517 14:56:23 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:50.517 14:56:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:50.517 14:56:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.517 14:56:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:50.517 14:56:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:50.517 14:56:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:50.517 14:56:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.517 14:56:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.517 14:56:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.517 14:56:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:50.517 14:56:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:50.517 14:56:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:50.517 14:56:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:50.517 14:56:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:50.517 14:56:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:50.517 14:56:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.517 14:56:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.517 14:56:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:50.517 14:56:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:50.517 14:56:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:50.517 14:56:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:50.517 14:56:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:50.517 14:56:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.517 14:56:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:50.517 14:56:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:50.517 14:56:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:50.517 14:56:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:50.517 14:56:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:50.517 14:56:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:50.517 Cannot find device "nvmf_tgt_br" 00:15:50.517 14:56:23 -- nvmf/common.sh@154 -- # true 00:15:50.517 14:56:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:50.775 Cannot find device "nvmf_tgt_br2" 00:15:50.775 14:56:23 -- nvmf/common.sh@155 -- # true 00:15:50.775 14:56:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:50.775 14:56:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:50.775 Cannot find device "nvmf_tgt_br" 00:15:50.775 14:56:23 -- nvmf/common.sh@157 -- # true 00:15:50.775 14:56:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:50.775 Cannot find device "nvmf_tgt_br2" 00:15:50.775 14:56:23 -- nvmf/common.sh@158 -- # true 00:15:50.775 14:56:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:50.775 14:56:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:50.775 14:56:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:50.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.775 14:56:23 -- nvmf/common.sh@161 -- # true 00:15:50.775 14:56:23 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:50.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:50.775 14:56:23 -- nvmf/common.sh@162 -- # true 00:15:50.775 14:56:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:50.775 14:56:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:50.775 14:56:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:50.775 14:56:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:50.775 14:56:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:50.775 14:56:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:50.775 14:56:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:50.775 14:56:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:50.775 14:56:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:50.775 14:56:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:50.776 14:56:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:50.776 14:56:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:50.776 14:56:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:50.776 14:56:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:50.776 14:56:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:50.776 14:56:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:50.776 14:56:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:50.776 14:56:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:50.776 14:56:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:50.776 14:56:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:50.776 14:56:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:50.776 14:56:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:50.776 14:56:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.034 14:56:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:51.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:15:51.034 00:15:51.034 --- 10.0.0.2 ping statistics --- 00:15:51.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.034 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:51.034 14:56:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:51.034 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.034 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:15:51.034 00:15:51.034 --- 10.0.0.3 ping statistics --- 00:15:51.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.034 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:51.034 14:56:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:51.034 00:15:51.034 --- 10.0.0.1 ping statistics --- 00:15:51.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.034 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:51.034 14:56:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.034 14:56:23 -- nvmf/common.sh@421 -- # return 0 00:15:51.034 14:56:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:51.034 14:56:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.034 14:56:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:51.034 14:56:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:51.034 14:56:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.034 14:56:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:51.034 14:56:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:51.034 14:56:23 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:51.034 14:56:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:51.034 14:56:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.034 14:56:23 -- common/autotest_common.sh@10 -- # set +x 00:15:51.034 14:56:23 -- nvmf/common.sh@469 -- # nvmfpid=86219 00:15:51.034 14:56:23 -- nvmf/common.sh@470 -- # waitforlisten 86219 00:15:51.034 14:56:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:51.034 14:56:23 -- common/autotest_common.sh@829 -- # '[' -z 86219 ']' 00:15:51.034 14:56:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.034 14:56:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.034 14:56:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.034 14:56:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.034 14:56:23 -- common/autotest_common.sh@10 -- # set +x 00:15:51.034 [2024-12-01 14:56:23.978435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:51.034 [2024-12-01 14:56:23.978538] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.034 [2024-12-01 14:56:24.110750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.293 [2024-12-01 14:56:24.211639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:51.293 [2024-12-01 14:56:24.211830] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.293 [2024-12-01 14:56:24.211848] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.293 [2024-12-01 14:56:24.211859] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
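The nvmftestinit output above builds the test network by hand with ip(8) before the target is launched. A minimal sketch of that topology, using only the interface names and addresses that appear in the log (the real nvmf/common.sh helper also tears down any leftovers first and adds a second target interface, nvmf_tgt_if2 at 10.0.0.3):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # host side reaches the target namespace, as the pings above confirm

With this in place the nvmf_tgt process runs inside nvmf_tgt_ns_spdk (via ip netns exec, as shown in the nvmfappstart line above) and the initiator reaches it at 10.0.0.2 over the nvmf_br bridge.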
00:15:51.293 [2024-12-01 14:56:24.211887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.229 14:56:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.229 14:56:24 -- common/autotest_common.sh@862 -- # return 0 00:15:52.229 14:56:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:52.229 14:56:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.229 14:56:24 -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 14:56:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.229 14:56:25 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:52.229 14:56:25 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:52.229 14:56:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.229 14:56:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 [2024-12-01 14:56:25.028291] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.229 14:56:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.229 14:56:25 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:52.229 14:56:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.229 14:56:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 14:56:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.229 14:56:25 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.229 14:56:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.229 14:56:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 [2024-12-01 14:56:25.044434] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.229 14:56:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.229 14:56:25 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:52.229 14:56:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.229 14:56:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 14:56:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.229 14:56:25 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:52.229 14:56:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.229 14:56:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 malloc0 00:15:52.229 14:56:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.229 14:56:25 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:52.229 14:56:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.229 14:56:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 14:56:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.229 14:56:25 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:52.229 14:56:25 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:52.229 14:56:25 -- nvmf/common.sh@520 -- # config=() 00:15:52.229 14:56:25 -- nvmf/common.sh@520 -- # local subsystem config 00:15:52.229 14:56:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:52.229 14:56:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:52.229 { 00:15:52.229 "params": { 00:15:52.229 "name": "Nvme$subsystem", 00:15:52.229 "trtype": "$TEST_TRANSPORT", 
00:15:52.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:52.229 "adrfam": "ipv4", 00:15:52.229 "trsvcid": "$NVMF_PORT", 00:15:52.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:52.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:52.229 "hdgst": ${hdgst:-false}, 00:15:52.229 "ddgst": ${ddgst:-false} 00:15:52.229 }, 00:15:52.229 "method": "bdev_nvme_attach_controller" 00:15:52.229 } 00:15:52.229 EOF 00:15:52.229 )") 00:15:52.229 14:56:25 -- nvmf/common.sh@542 -- # cat 00:15:52.229 14:56:25 -- nvmf/common.sh@544 -- # jq . 00:15:52.229 14:56:25 -- nvmf/common.sh@545 -- # IFS=, 00:15:52.229 14:56:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:52.229 "params": { 00:15:52.229 "name": "Nvme1", 00:15:52.229 "trtype": "tcp", 00:15:52.229 "traddr": "10.0.0.2", 00:15:52.229 "adrfam": "ipv4", 00:15:52.229 "trsvcid": "4420", 00:15:52.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.229 "hdgst": false, 00:15:52.229 "ddgst": false 00:15:52.229 }, 00:15:52.229 "method": "bdev_nvme_attach_controller" 00:15:52.229 }' 00:15:52.229 [2024-12-01 14:56:25.143221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:52.229 [2024-12-01 14:56:25.143328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86270 ] 00:15:52.229 [2024-12-01 14:56:25.287597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.486 [2024-12-01 14:56:25.349287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.486 Running I/O for 10 seconds... 00:16:02.464 00:16:02.464 Latency(us) 00:16:02.464 [2024-12-01T14:56:35.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.464 [2024-12-01T14:56:35.579Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:02.464 Verification LBA range: start 0x0 length 0x1000 00:16:02.464 Nvme1n1 : 10.01 10399.50 81.25 0.00 0.00 12279.02 916.01 18588.39 00:16:02.464 [2024-12-01T14:56:35.579Z] =================================================================================================================== 00:16:02.464 [2024-12-01T14:56:35.579Z] Total : 10399.50 81.25 0.00 0.00 12279.02 916.01 18588.39 00:16:02.722 14:56:35 -- target/zcopy.sh@39 -- # perfpid=86387 00:16:02.722 14:56:35 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:02.722 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:16:02.723 14:56:35 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:02.723 14:56:35 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:02.723 14:56:35 -- nvmf/common.sh@520 -- # config=() 00:16:02.723 14:56:35 -- nvmf/common.sh@520 -- # local subsystem config 00:16:02.723 14:56:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:02.723 14:56:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:02.723 { 00:16:02.723 "params": { 00:16:02.723 "name": "Nvme$subsystem", 00:16:02.723 "trtype": "$TEST_TRANSPORT", 00:16:02.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:02.723 "adrfam": "ipv4", 00:16:02.723 "trsvcid": "$NVMF_PORT", 00:16:02.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:02.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:02.723 "hdgst": ${hdgst:-false}, 00:16:02.723 "ddgst": ${ddgst:-false} 
00:16:02.723 }, 00:16:02.723 "method": "bdev_nvme_attach_controller" 00:16:02.723 } 00:16:02.723 EOF 00:16:02.723 )") 00:16:02.723 14:56:35 -- nvmf/common.sh@542 -- # cat 00:16:02.723 [2024-12-01 14:56:35.718651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.718703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.723 14:56:35 -- nvmf/common.sh@544 -- # jq . 00:16:02.723 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.723 14:56:35 -- nvmf/common.sh@545 -- # IFS=, 00:16:02.723 14:56:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:02.723 "params": { 00:16:02.723 "name": "Nvme1", 00:16:02.723 "trtype": "tcp", 00:16:02.723 "traddr": "10.0.0.2", 00:16:02.723 "adrfam": "ipv4", 00:16:02.723 "trsvcid": "4420", 00:16:02.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:02.723 "hdgst": false, 00:16:02.723 "ddgst": false 00:16:02.723 }, 00:16:02.723 "method": "bdev_nvme_attach_controller" 00:16:02.723 }' 00:16:02.723 [2024-12-01 14:56:35.730599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.730650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.723 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.723 [2024-12-01 14:56:35.738576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.738607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.723 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.723 [2024-12-01 14:56:35.750612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.750661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.723 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.723 [2024-12-01 14:56:35.762614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.762662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.723 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.723 [2024-12-01 14:56:35.767578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:02.723 [2024-12-01 14:56:35.768198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86387 ] 00:16:02.723 [2024-12-01 14:56:35.774616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.774661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.723 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.723 [2024-12-01 14:56:35.786616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.786660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.723 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.723 [2024-12-01 14:56:35.798601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.798643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.723 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.723 [2024-12-01 14:56:35.810604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.810645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.723 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.723 [2024-12-01 14:56:35.822605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.822645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.723 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.723 [2024-12-01 14:56:35.834622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.723 [2024-12-01 14:56:35.834664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.846610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.846650] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.858612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.858654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.870636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.870680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.882648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.882690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.894624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.894668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.906133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.982 [2024-12-01 14:56:35.906653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.906680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.914622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.914662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.922634] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.922662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.930641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.930669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.938633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.938661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.946649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.946675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 [2024-12-01 14:56:35.949087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.954633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.954675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.982 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.982 [2024-12-01 14:56:35.962651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.982 [2024-12-01 14:56:35.962678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:35.970634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:35.970675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:35.978652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:35.978679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:35.986644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:35.986671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:35.994642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:35.994683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.002661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.002689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.010658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.010685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.018659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.018687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.030653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.030696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.038668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.038711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.046675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.046706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.054679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.054709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.062681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.062710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.070683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.070712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.078682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.078740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.086688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.086717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.983 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.983 [2024-12-01 14:56:36.094667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.983 [2024-12-01 14:56:36.094709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.241 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.241 [2024-12-01 14:56:36.102707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.241 [2024-12-01 14:56:36.102738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.241 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.241 Running I/O for 5 seconds... 00:16:03.241 [2024-12-01 14:56:36.110690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.241 [2024-12-01 14:56:36.110719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.241 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.241 [2024-12-01 14:56:36.121623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.241 [2024-12-01 14:56:36.121654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.241 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.241 [2024-12-01 14:56:36.130749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.241 [2024-12-01 14:56:36.130809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.241 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.241 [2024-12-01 14:56:36.141535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.241 [2024-12-01 14:56:36.141566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.241 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.241 [2024-12-01 14:56:36.151562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.241 [2024-12-01 14:56:36.151594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.241 2024/12/01 14:56:36 error 
on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.161986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.162018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.174194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.174227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.184080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.184127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.197970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.198018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.207324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.207357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.217053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.217086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.230359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.230390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 
14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.245351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.245399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.256580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.256614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.265638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.265669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.276467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.276500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.288031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.288080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.297686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.297718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.310463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.310493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
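Each "Requested NSID 1 already in use" / "Unable to add namespace" pair in this stretch of the log is the target rejecting another nvmf_subsystem_add_ns call, and the matching "error on JSON-RPC call ... Code=-32602 Msg=Invalid parameters" line is the client side of the same attempt. Stripped of the test's rpc_cmd wrapper, each attempt is roughly equivalent to the following (the rpc.py path is shown for illustration only):

  # NSID 1 was already attached to cnode1 during setup, so re-adding it must fail:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
      nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # request sent over /var/tmp/spdk.sock:
  #   {"method": "nvmf_subsystem_add_ns",
  #    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
  #               "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
  # response: error Code=-32602 Msg=Invalid parameters

The test script keeps re-issuing the same call around and during the 5-second randrw bdevperf run, so the identical rejection is logged once per attempt; only the timestamps differ.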
00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.320153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.320184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.333626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.333657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.242 [2024-12-01 14:56:36.343248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.242 [2024-12-01 14:56:36.343281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.242 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.501 [2024-12-01 14:56:36.357690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.501 [2024-12-01 14:56:36.357724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.501 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.501 [2024-12-01 14:56:36.368583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.501 [2024-12-01 14:56:36.368614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.501 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.501 [2024-12-01 14:56:36.377417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.501 [2024-12-01 14:56:36.377449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.501 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.501 [2024-12-01 14:56:36.388443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.501 [2024-12-01 14:56:36.388474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:03.501 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.501 [2024-12-01 14:56:36.398689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.501 [2024-12-01 14:56:36.398721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.501 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.501 [2024-12-01 14:56:36.408933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.501 [2024-12-01 14:56:36.408966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.501 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.501 [2024-12-01 14:56:36.422070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.501 [2024-12-01 14:56:36.422133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.501 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.501 [2024-12-01 14:56:36.431575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.501 [2024-12-01 14:56:36.431605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.501 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.501 [2024-12-01 14:56:36.441178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.501 [2024-12-01 14:56:36.441231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.451512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.451544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.463481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.463514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.472611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.472645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.482979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.483029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.494617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.494649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.510319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.510352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.520038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.520085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.532773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.532804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.542007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.542056] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.556526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.556559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.574609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.574643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.588560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.588593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.597015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.597061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.502 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.502 [2024-12-01 14:56:36.611931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.502 [2024-12-01 14:56:36.611963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.621693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.621725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.637139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 
14:56:36.637172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.652352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.652383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.660968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.660999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.672054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.672089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.682753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.682840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.692996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.693029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.705031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.705062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.714548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:03.761 [2024-12-01 14:56:36.714582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.724596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.724624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.735056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.735109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.745725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.745771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.756391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.756423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.761 [2024-12-01 14:56:36.768366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.761 [2024-12-01 14:56:36.768398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.761 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.762 [2024-12-01 14:56:36.785361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.762 [2024-12-01 14:56:36.785409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.762 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.762 [2024-12-01 14:56:36.800727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:03.762 [2024-12-01 14:56:36.800774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.762 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.762 [2024-12-01 14:56:36.818809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.762 [2024-12-01 14:56:36.818858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.762 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.762 [2024-12-01 14:56:36.828182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.762 [2024-12-01 14:56:36.828212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.762 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.762 [2024-12-01 14:56:36.841935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.762 [2024-12-01 14:56:36.841965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.762 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.762 [2024-12-01 14:56:36.851282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.762 [2024-12-01 14:56:36.851314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.762 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.762 [2024-12-01 14:56:36.865650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.762 [2024-12-01 14:56:36.865682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.762 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:36.874721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:36.874783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:36.890594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:36.890628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:36.909309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:36.909360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:36.920204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:36.920236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:36.937100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:36.937158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:36.954026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:36.954058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:36.970619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:36.970652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:36.981722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:36.981784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:36.998269] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:36.998302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.009367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.009443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.019178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.019209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.029175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.029227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.040856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.040902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.050480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.050512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.060254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.060287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 
14:56:37.070068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.070099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.083926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.083975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.092562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.092592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.107325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.107356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.118420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.118452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.020 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.020 [2024-12-01 14:56:37.127461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.020 [2024-12-01 14:56:37.127494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.021 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.279 [2024-12-01 14:56:37.138161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.279 [2024-12-01 14:56:37.138194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.279 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:04.279 [2024-12-01 14:56:37.150009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.279 [2024-12-01 14:56:37.150041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.279 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.279 [2024-12-01 14:56:37.165243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.279 [2024-12-01 14:56:37.165291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.279 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.279 [2024-12-01 14:56:37.176710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.279 [2024-12-01 14:56:37.176742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.279 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.279 [2024-12-01 14:56:37.185488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.279 [2024-12-01 14:56:37.185520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.279 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.279 [2024-12-01 14:56:37.196346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.279 [2024-12-01 14:56:37.196378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.279 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.279 [2024-12-01 14:56:37.208286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.279 [2024-12-01 14:56:37.208319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.279 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.279 [2024-12-01 14:56:37.217377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.279 [2024-12-01 14:56:37.217444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.231854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.231904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.243046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.243094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.251451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.251482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.262301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.262333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.272040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.272089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.281414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.281446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.291001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.291035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.304425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.304454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.313752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.313813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.323328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.323361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.333416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.333446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.343338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.343371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.353159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.353206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.363043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.363092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.373279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.373328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.280 [2024-12-01 14:56:37.383167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.280 [2024-12-01 14:56:37.383200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.280 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.393362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.393395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.405014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.405048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.414490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.414523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.424439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.424471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.436174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.436206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.450219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.450251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.459426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.459458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.469102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.469175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.479114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.479162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.488735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.488776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.500220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.500253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.511266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.511297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.519799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.539 [2024-12-01 14:56:37.519844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.539 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.539 [2024-12-01 14:56:37.530668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.540 [2024-12-01 14:56:37.530699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.540 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.540 [2024-12-01 14:56:37.542114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.540 [2024-12-01 14:56:37.542162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.540 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.540 [2024-12-01 14:56:37.557177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.540 [2024-12-01 14:56:37.557227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.540 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.540 [2024-12-01 14:56:37.568173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.540 [2024-12-01 14:56:37.568206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.540 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.540 [2024-12-01 14:56:37.584577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.540 [2024-12-01 14:56:37.584608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.540 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.540 [2024-12-01 14:56:37.596038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.540 [2024-12-01 14:56:37.596087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.540 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.540 [2024-12-01 14:56:37.605921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.540 [2024-12-01 14:56:37.605969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.540 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.540 [2024-12-01 14:56:37.615727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.540 [2024-12-01 14:56:37.615771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.540 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.540 [2024-12-01 14:56:37.625874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.540 [2024-12-01 14:56:37.625922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.540 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.540 [2024-12-01 14:56:37.638790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.540 [2024-12-01 14:56:37.638820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.540 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.655274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.655307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.672012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.672047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.688592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.688624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.704695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.704726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.721981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.722029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.737520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.737569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.748945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.748993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.764800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.764832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.774672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.774704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.788068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.788117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.796869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.796903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.807486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.807535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.818508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.799 [2024-12-01 14:56:37.818541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.799 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.799 [2024-12-01 14:56:37.832711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.800 [2024-12-01 14:56:37.832744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.800 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.800 [2024-12-01 14:56:37.842380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.800 [2024-12-01 14:56:37.842413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.800 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.800 [2024-12-01 14:56:37.854445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.800 [2024-12-01 14:56:37.854477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.800 2024/12/01 14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.800 [2024-12-01 14:56:37.872512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.800 [2024-12-01 14:56:37.872546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.800 2024/12/01 
14:56:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:04.800 [2024-12-01 14:56:37.888356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:04.800 [2024-12-01 14:56:37.888389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same three-line error pattern (subsystem.c:1793 "Requested NSID 1 already in use", nvmf_rpc.c:1513 "Unable to add namespace", JSON-RPC reply Code=-32602 Msg=Invalid parameters for nvmf_subsystem_add_ns on nqn.2016-06.io.spdk:cnode1 with bdev_name:malloc0 nsid:1) repeats continuously from 14:56:37.888 to 14:56:39.606, elapsed 00:16:04.800 through 00:16:06.620 ...]
00:16:06.620 [2024-12-01 14:56:39.606318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.620 [2024-12-01 14:56:39.606349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.615977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.616011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.630346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.630376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.641468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.641500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.650264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.650294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.660986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.661017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.672692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.672722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.681729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.681789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.691986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.692019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.708814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.708844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.719680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.719710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.620 [2024-12-01 14:56:39.728349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.620 [2024-12-01 14:56:39.728378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.620 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.738623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.738653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.880 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.748547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.748594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.880 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.758348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.758395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:06.880 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.768451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.768498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.880 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.778318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.778349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.880 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.789578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.789610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.880 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.799058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.799088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.880 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.812177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.812206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.880 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.821144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.821174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.880 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.834735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.834775] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.880 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.880 [2024-12-01 14:56:39.843534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.880 [2024-12-01 14:56:39.843564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.855210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.855240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.864587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.864616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.877865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.877895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.886604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.886634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.897062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.897118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.908617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 
14:56:39.908647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.917666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.917696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.931044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.931074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.939731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.939788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.950031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.950079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.960528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.960560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.974950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.881 [2024-12-01 14:56:39.974985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.881 [2024-12-01 14:56:39.985546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:06.881 [2024-12-01 14:56:39.985576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.881 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.140 [2024-12-01 14:56:39.995635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.140 [2024-12-01 14:56:39.995666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.140 2024/12/01 14:56:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.140 [2024-12-01 14:56:40.007074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.140 [2024-12-01 14:56:40.007139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.140 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.140 [2024-12-01 14:56:40.017696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.140 [2024-12-01 14:56:40.017745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.030229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.030278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.046034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.046084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.061540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.061587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.071440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:07.141 [2024-12-01 14:56:40.071487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.082444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.082475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.094429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.094463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.106011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.106042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.115371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.115401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.126151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.126182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.137088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.137145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.153581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.153629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.168804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.168866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.178830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.178878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.190252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.190298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.200817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.200864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.210997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.211031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.221370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.221402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.231923] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.231957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.141 [2024-12-01 14:56:40.249250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.141 [2024-12-01 14:56:40.249281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.141 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.401 [2024-12-01 14:56:40.265158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.265190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.282832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.282864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.297462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.297509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.311196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.311227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.328338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.328368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 
14:56:40.337998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.338045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.352406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.352436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.363348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.363378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.372216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.372245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.383150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.383180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.395118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.395148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.405054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.405086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:07.402 [2024-12-01 14:56:40.415285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.415316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.429605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.429635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.447054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.447084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.463321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.463352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.479875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.479905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.495482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.495512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.402 [2024-12-01 14:56:40.504602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.402 [2024-12-01 14:56:40.504632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.402 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.516091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.516122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.524679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.524709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.535793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.535822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.545830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.545858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.555372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.555402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.565060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.565116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.574602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.574632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.584525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.584555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.594888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.594917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.604504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.604534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.614215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.614244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.624180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.624211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.634147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.634175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.662 [2024-12-01 14:56:40.645680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.662 [2024-12-01 14:56:40.645710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.662 [2024-12-01 14:56:40.656810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.662 [2024-12-01 14:56:40.656840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:07.662 2024/12/01 14:56:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the three messages above repeat verbatim, with only the timestamps advancing, for every further nvmf_subsystem_add_ns attempt logged between 14:56:40.672 and 14:56:41.110 while NSID 1 stays attached to nqn.2016-06.io.spdk:cnode1]
00:16:08.184
00:16:08.184 Latency(us)
00:16:08.184 [2024-12-01T14:56:41.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:08.184 [2024-12-01T14:56:41.299Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:08.184 Nvme1n1 : 5.01 12722.85 99.40 0.00 0.00 10049.52 3708.74 20494.89
00:16:08.184 [2024-12-01T14:56:41.299Z] ===================================================================================================================
00:16:08.184 [2024-12-01T14:56:41.299Z] Total : 12722.85 99.40 0.00 0.00 10049.52 3708.74 20494.89
[the same "Requested NSID 1 already in use" / "Unable to add namespace" / JSON-RPC error sequence continues, timestamps advancing, from 14:56:41.122 through 14:56:41.270 while the background retry loop winds down]
00:16:08.185 [2024-12-01 14:56:41.278684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:16:08.185 [2024-12-01 14:56:41.278713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.185 2024/12/01 14:56:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.185 [2024-12-01 14:56:41.286665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.185 [2024-12-01 14:56:41.286706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.185 2024/12/01 14:56:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.185 [2024-12-01 14:56:41.294687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.185 [2024-12-01 14:56:41.294715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.444 2024/12/01 14:56:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.444 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86387) - No such process 00:16:08.444 14:56:41 -- target/zcopy.sh@49 -- # wait 86387 00:16:08.444 14:56:41 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.444 14:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.444 14:56:41 -- common/autotest_common.sh@10 -- # set +x 00:16:08.444 14:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.444 14:56:41 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:08.444 14:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.444 14:56:41 -- common/autotest_common.sh@10 -- # set +x 00:16:08.444 delay0 00:16:08.444 14:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.444 14:56:41 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:08.444 14:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.444 14:56:41 -- common/autotest_common.sh@10 -- # set +x 00:16:08.444 14:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.444 14:56:41 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:08.444 [2024-12-01 14:56:41.495474] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:15.040 Initializing NVMe Controllers 00:16:15.040 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:15.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:15.040 Initialization complete. Launching workers. 
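[Editor's note, before the abort statistics below: the zcopy abort pass traced above swaps the malloc namespace for a delay bdev, presumably so that I/O stays in flight long enough for aborts to land, and then runs the abort example against it. A condensed, hypothetical replay of the same steps outside the test harness would look roughly like the following; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, and the exact rpc.py signatures are assumed to match this SPDK revision.]

# sketch only: same RPCs and abort invocation as traced above, assuming the default /var/tmp/spdk.sock
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'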
00:16:15.040 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 72 00:16:15.040 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 359, failed to submit 33 00:16:15.040 success 170, unsuccess 189, failed 0 00:16:15.040 14:56:47 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:15.040 14:56:47 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:15.040 14:56:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:15.040 14:56:47 -- nvmf/common.sh@116 -- # sync 00:16:15.040 14:56:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:15.040 14:56:47 -- nvmf/common.sh@119 -- # set +e 00:16:15.040 14:56:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:15.040 14:56:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:15.040 rmmod nvme_tcp 00:16:15.040 rmmod nvme_fabrics 00:16:15.040 rmmod nvme_keyring 00:16:15.040 14:56:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:15.040 14:56:47 -- nvmf/common.sh@123 -- # set -e 00:16:15.040 14:56:47 -- nvmf/common.sh@124 -- # return 0 00:16:15.040 14:56:47 -- nvmf/common.sh@477 -- # '[' -n 86219 ']' 00:16:15.040 14:56:47 -- nvmf/common.sh@478 -- # killprocess 86219 00:16:15.040 14:56:47 -- common/autotest_common.sh@936 -- # '[' -z 86219 ']' 00:16:15.040 14:56:47 -- common/autotest_common.sh@940 -- # kill -0 86219 00:16:15.040 14:56:47 -- common/autotest_common.sh@941 -- # uname 00:16:15.040 14:56:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:15.040 14:56:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86219 00:16:15.040 14:56:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:15.040 14:56:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:15.040 killing process with pid 86219 00:16:15.040 14:56:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86219' 00:16:15.040 14:56:47 -- common/autotest_common.sh@955 -- # kill 86219 00:16:15.040 14:56:47 -- common/autotest_common.sh@960 -- # wait 86219 00:16:15.040 14:56:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:15.040 14:56:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:15.040 14:56:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:15.040 14:56:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.040 14:56:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:15.040 14:56:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.040 14:56:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.040 14:56:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.040 14:56:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:15.040 00:16:15.040 real 0m24.655s 00:16:15.040 user 0m38.287s 00:16:15.040 sys 0m7.568s 00:16:15.040 14:56:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:15.040 14:56:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.040 ************************************ 00:16:15.040 END TEST nvmf_zcopy 00:16:15.040 ************************************ 00:16:15.040 14:56:48 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:15.040 14:56:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:15.041 14:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:15.041 14:56:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.041 ************************************ 00:16:15.041 START TEST nvmf_nmic 
00:16:15.041 ************************************ 00:16:15.041 14:56:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:15.300 * Looking for test storage... 00:16:15.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:15.300 14:56:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:15.300 14:56:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:15.300 14:56:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:15.300 14:56:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:15.300 14:56:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:15.300 14:56:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:15.300 14:56:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:15.300 14:56:48 -- scripts/common.sh@335 -- # IFS=.-: 00:16:15.300 14:56:48 -- scripts/common.sh@335 -- # read -ra ver1 00:16:15.300 14:56:48 -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.300 14:56:48 -- scripts/common.sh@336 -- # read -ra ver2 00:16:15.300 14:56:48 -- scripts/common.sh@337 -- # local 'op=<' 00:16:15.300 14:56:48 -- scripts/common.sh@339 -- # ver1_l=2 00:16:15.300 14:56:48 -- scripts/common.sh@340 -- # ver2_l=1 00:16:15.300 14:56:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:15.300 14:56:48 -- scripts/common.sh@343 -- # case "$op" in 00:16:15.300 14:56:48 -- scripts/common.sh@344 -- # : 1 00:16:15.300 14:56:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:15.300 14:56:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:15.300 14:56:48 -- scripts/common.sh@364 -- # decimal 1 00:16:15.300 14:56:48 -- scripts/common.sh@352 -- # local d=1 00:16:15.300 14:56:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.300 14:56:48 -- scripts/common.sh@354 -- # echo 1 00:16:15.300 14:56:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:15.300 14:56:48 -- scripts/common.sh@365 -- # decimal 2 00:16:15.300 14:56:48 -- scripts/common.sh@352 -- # local d=2 00:16:15.300 14:56:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.300 14:56:48 -- scripts/common.sh@354 -- # echo 2 00:16:15.300 14:56:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:15.300 14:56:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:15.300 14:56:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:15.300 14:56:48 -- scripts/common.sh@367 -- # return 0 00:16:15.300 14:56:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.300 14:56:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:15.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.300 --rc genhtml_branch_coverage=1 00:16:15.300 --rc genhtml_function_coverage=1 00:16:15.300 --rc genhtml_legend=1 00:16:15.300 --rc geninfo_all_blocks=1 00:16:15.300 --rc geninfo_unexecuted_blocks=1 00:16:15.300 00:16:15.300 ' 00:16:15.300 14:56:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:15.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.300 --rc genhtml_branch_coverage=1 00:16:15.300 --rc genhtml_function_coverage=1 00:16:15.301 --rc genhtml_legend=1 00:16:15.301 --rc geninfo_all_blocks=1 00:16:15.301 --rc geninfo_unexecuted_blocks=1 00:16:15.301 00:16:15.301 ' 00:16:15.301 14:56:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:15.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.301 --rc 
genhtml_branch_coverage=1 00:16:15.301 --rc genhtml_function_coverage=1 00:16:15.301 --rc genhtml_legend=1 00:16:15.301 --rc geninfo_all_blocks=1 00:16:15.301 --rc geninfo_unexecuted_blocks=1 00:16:15.301 00:16:15.301 ' 00:16:15.301 14:56:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:15.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.301 --rc genhtml_branch_coverage=1 00:16:15.301 --rc genhtml_function_coverage=1 00:16:15.301 --rc genhtml_legend=1 00:16:15.301 --rc geninfo_all_blocks=1 00:16:15.301 --rc geninfo_unexecuted_blocks=1 00:16:15.301 00:16:15.301 ' 00:16:15.301 14:56:48 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.301 14:56:48 -- nvmf/common.sh@7 -- # uname -s 00:16:15.301 14:56:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.301 14:56:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.301 14:56:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.301 14:56:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.301 14:56:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.301 14:56:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.301 14:56:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.301 14:56:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.301 14:56:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.301 14:56:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.301 14:56:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:16:15.301 14:56:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:16:15.301 14:56:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.301 14:56:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.301 14:56:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.301 14:56:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.301 14:56:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.301 14:56:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.301 14:56:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.301 14:56:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.301 14:56:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.301 14:56:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.301 14:56:48 -- paths/export.sh@5 -- # export PATH 00:16:15.301 14:56:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.301 14:56:48 -- nvmf/common.sh@46 -- # : 0 00:16:15.301 14:56:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:15.301 14:56:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:15.301 14:56:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:15.301 14:56:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.301 14:56:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.301 14:56:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:15.301 14:56:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:15.301 14:56:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:15.301 14:56:48 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:15.301 14:56:48 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:15.301 14:56:48 -- target/nmic.sh@14 -- # nvmftestinit 00:16:15.301 14:56:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:15.301 14:56:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.301 14:56:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:15.301 14:56:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:15.301 14:56:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:15.301 14:56:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.301 14:56:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.301 14:56:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.301 14:56:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:15.301 14:56:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:15.301 14:56:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:15.301 14:56:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:15.301 14:56:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:15.301 14:56:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:15.301 14:56:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.301 14:56:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.301 14:56:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:15.301 14:56:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:15.301 14:56:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:15.301 14:56:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:15.301 14:56:48 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:15.301 14:56:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.301 14:56:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:15.301 14:56:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:15.301 14:56:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:15.301 14:56:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:15.301 14:56:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:15.301 14:56:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:15.301 Cannot find device "nvmf_tgt_br" 00:16:15.301 14:56:48 -- nvmf/common.sh@154 -- # true 00:16:15.301 14:56:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.301 Cannot find device "nvmf_tgt_br2" 00:16:15.301 14:56:48 -- nvmf/common.sh@155 -- # true 00:16:15.301 14:56:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:15.301 14:56:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:15.301 Cannot find device "nvmf_tgt_br" 00:16:15.301 14:56:48 -- nvmf/common.sh@157 -- # true 00:16:15.301 14:56:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:15.301 Cannot find device "nvmf_tgt_br2" 00:16:15.301 14:56:48 -- nvmf/common.sh@158 -- # true 00:16:15.301 14:56:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:15.561 14:56:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:15.561 14:56:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:15.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.561 14:56:48 -- nvmf/common.sh@161 -- # true 00:16:15.561 14:56:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:15.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.561 14:56:48 -- nvmf/common.sh@162 -- # true 00:16:15.561 14:56:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:15.561 14:56:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:15.561 14:56:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:15.561 14:56:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:15.561 14:56:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:15.561 14:56:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:15.561 14:56:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:15.561 14:56:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:15.561 14:56:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:15.561 14:56:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:15.561 14:56:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:15.561 14:56:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:15.561 14:56:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:15.561 14:56:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:15.561 14:56:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:15.561 14:56:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:15.561 14:56:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:15.561 14:56:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:15.561 14:56:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:15.561 14:56:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:15.561 14:56:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:15.561 14:56:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:15.561 14:56:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:15.561 14:56:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:15.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:15.561 00:16:15.561 --- 10.0.0.2 ping statistics --- 00:16:15.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.561 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:15.561 14:56:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:15.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:15.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:15.561 00:16:15.561 --- 10.0.0.3 ping statistics --- 00:16:15.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.561 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:15.561 14:56:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:15.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:16:15.561 00:16:15.561 --- 10.0.0.1 ping statistics --- 00:16:15.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.561 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:16:15.561 14:56:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.561 14:56:48 -- nvmf/common.sh@421 -- # return 0 00:16:15.561 14:56:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:15.561 14:56:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.561 14:56:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:15.561 14:56:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:15.561 14:56:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.561 14:56:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:15.561 14:56:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:15.561 14:56:48 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:15.561 14:56:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:15.561 14:56:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:15.561 14:56:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.561 14:56:48 -- nvmf/common.sh@469 -- # nvmfpid=86717 00:16:15.561 14:56:48 -- nvmf/common.sh@470 -- # waitforlisten 86717 00:16:15.561 14:56:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:15.561 14:56:48 -- common/autotest_common.sh@829 -- # '[' -z 86717 ']' 00:16:15.561 14:56:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.561 14:56:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.561 14:56:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
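[Editor's note: the ip/iptables calls traced above are nvmf_veth_init building a small bridged topology. The initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace; the target runs inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2 on nvmf_tgt_if and 10.0.0.3 on nvmf_tgt_if2; the peer ends of each veth pair are joined through the nvmf_br bridge. Stripped of the harness wrappers, the plumbing amounts to roughly the sketch below (interface and namespace names are the ones this harness uses).]

# condensed sketch of the veth/netns topology set up above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target listen address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target listen address
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# (the harness also brings each link up and adds a FORWARD rule for bridge-local traffic)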
00:16:15.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.561 14:56:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.561 14:56:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.820 [2024-12-01 14:56:48.699849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:15.820 [2024-12-01 14:56:48.699936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.820 [2024-12-01 14:56:48.840516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.820 [2024-12-01 14:56:48.892974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:15.820 [2024-12-01 14:56:48.893139] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.820 [2024-12-01 14:56:48.893152] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.820 [2024-12-01 14:56:48.893161] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.820 [2024-12-01 14:56:48.893315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.820 [2024-12-01 14:56:48.893384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.820 [2024-12-01 14:56:48.894001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.820 [2024-12-01 14:56:48.894010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.758 14:56:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.758 14:56:49 -- common/autotest_common.sh@862 -- # return 0 00:16:16.758 14:56:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:16.758 14:56:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:16.758 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:16:16.758 14:56:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.758 14:56:49 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:16.758 14:56:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.758 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:16:16.758 [2024-12-01 14:56:49.709207] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.758 14:56:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.758 14:56:49 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:16.758 14:56:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.758 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:16:16.758 Malloc0 00:16:16.758 14:56:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.758 14:56:49 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:16.758 14:56:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.758 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:16:16.758 14:56:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.758 14:56:49 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:16.758 14:56:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.758 14:56:49 -- common/autotest_common.sh@10 -- # set +x 
00:16:16.758 14:56:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.758 14:56:49 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.758 14:56:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.758 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:16:16.758 [2024-12-01 14:56:49.782626] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.758 14:56:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.758 14:56:49 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:16.758 test case1: single bdev can't be used in multiple subsystems 00:16:16.758 14:56:49 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:16.758 14:56:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.758 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:16:16.758 14:56:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.758 14:56:49 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:16.758 14:56:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.758 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:16:16.758 14:56:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.758 14:56:49 -- target/nmic.sh@28 -- # nmic_status=0 00:16:16.758 14:56:49 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:16.758 14:56:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.758 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:16:16.758 [2024-12-01 14:56:49.806467] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:16.758 [2024-12-01 14:56:49.806498] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:16.758 [2024-12-01 14:56:49.806507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.759 2024/12/01 14:56:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.759 request: 00:16:16.759 { 00:16:16.759 "method": "nvmf_subsystem_add_ns", 00:16:16.759 "params": { 00:16:16.759 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:16.759 "namespace": { 00:16:16.759 "bdev_name": "Malloc0" 00:16:16.759 } 00:16:16.759 } 00:16:16.759 } 00:16:16.759 Got JSON-RPC error response 00:16:16.759 GoRPCClient: error on JSON-RPC call 00:16:16.759 14:56:49 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:16.759 14:56:49 -- target/nmic.sh@29 -- # nmic_status=1 00:16:16.759 14:56:49 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:16.759 Adding namespace failed - expected result. 00:16:16.759 14:56:49 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
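[Editor's note: test case 1 above hinges on the bdev claim model. Once cnode1 adds Malloc0 as a namespace, the NVMe-oF target holds an exclusive_write claim on the bdev, so the second nvmf_subsystem_add_ns against cnode2 is expected to fail with Code=-32602, which is exactly what the error response above shows. Replayed outside the harness, the sequence is roughly the sketch below; it assumes scripts/rpc.py on the default /var/tmp/spdk.sock, with rpc_cmd in the trace being just a wrapper around rpc.py.]

# sketch of test case 1: a single bdev cannot back namespaces in two subsystems
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # takes the exclusive_write claim
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: bdev already claimed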
00:16:16.759 test case2: host connect to nvmf target in multiple paths 00:16:16.759 14:56:49 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:16.759 14:56:49 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:16.759 14:56:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.759 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:16:16.759 [2024-12-01 14:56:49.818578] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:16.759 14:56:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.759 14:56:49 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.018 14:56:49 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:17.277 14:56:50 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:17.277 14:56:50 -- common/autotest_common.sh@1187 -- # local i=0 00:16:17.277 14:56:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.277 14:56:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:17.277 14:56:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:19.181 14:56:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:19.182 14:56:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:19.182 14:56:52 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.182 14:56:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:19.182 14:56:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.182 14:56:52 -- common/autotest_common.sh@1197 -- # return 0 00:16:19.182 14:56:52 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:19.182 [global] 00:16:19.182 thread=1 00:16:19.182 invalidate=1 00:16:19.182 rw=write 00:16:19.182 time_based=1 00:16:19.182 runtime=1 00:16:19.182 ioengine=libaio 00:16:19.182 direct=1 00:16:19.182 bs=4096 00:16:19.182 iodepth=1 00:16:19.182 norandommap=0 00:16:19.182 numjobs=1 00:16:19.182 00:16:19.182 verify_dump=1 00:16:19.182 verify_backlog=512 00:16:19.182 verify_state_save=0 00:16:19.182 do_verify=1 00:16:19.182 verify=crc32c-intel 00:16:19.182 [job0] 00:16:19.182 filename=/dev/nvme0n1 00:16:19.182 Could not set queue depth (nvme0n1) 00:16:19.440 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.440 fio-3.35 00:16:19.440 Starting 1 thread 00:16:20.815 00:16:20.815 job0: (groupid=0, jobs=1): err= 0: pid=86827: Sun Dec 1 14:56:53 2024 00:16:20.815 read: IOPS=3463, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1001msec) 00:16:20.815 slat (nsec): min=12225, max=59069, avg=15162.25, stdev=5139.81 00:16:20.815 clat (usec): min=111, max=246, avg=141.96, stdev=15.59 00:16:20.815 lat (usec): min=124, max=261, avg=157.12, stdev=16.45 00:16:20.815 clat percentiles (usec): 00:16:20.815 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:16:20.815 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:16:20.815 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 163], 
95.00th=[ 172], 00:16:20.815 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 217], 99.95th=[ 223], 00:16:20.815 | 99.99th=[ 247] 00:16:20.815 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:20.815 slat (usec): min=18, max=143, avg=23.81, stdev= 7.72 00:16:20.815 clat (usec): min=79, max=495, avg=100.24, stdev=14.32 00:16:20.815 lat (usec): min=99, max=518, avg=124.06, stdev=17.10 00:16:20.815 clat percentiles (usec): 00:16:20.815 | 1.00th=[ 83], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 91], 00:16:20.815 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 100], 00:16:20.815 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 126], 00:16:20.815 | 99.00th=[ 145], 99.50th=[ 151], 99.90th=[ 167], 99.95th=[ 167], 00:16:20.815 | 99.99th=[ 494] 00:16:20.815 bw ( KiB/s): min=15688, max=15688, per=100.00%, avg=15688.00, stdev= 0.00, samples=1 00:16:20.815 iops : min= 3922, max= 3922, avg=3922.00, stdev= 0.00, samples=1 00:16:20.815 lat (usec) : 100=30.86%, 250=69.12%, 500=0.01% 00:16:20.815 cpu : usr=1.90%, sys=10.20%, ctx=7052, majf=0, minf=5 00:16:20.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.815 issued rwts: total=3467,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.815 00:16:20.815 Run status group 0 (all jobs): 00:16:20.815 READ: bw=13.5MiB/s (14.2MB/s), 13.5MiB/s-13.5MiB/s (14.2MB/s-14.2MB/s), io=13.5MiB (14.2MB), run=1001-1001msec 00:16:20.815 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:20.815 00:16:20.815 Disk stats (read/write): 00:16:20.815 nvme0n1: ios=3122/3254, merge=0/0, ticks=462/356, in_queue=818, util=91.28% 00:16:20.815 14:56:53 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:20.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:20.815 14:56:53 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:20.815 14:56:53 -- common/autotest_common.sh@1208 -- # local i=0 00:16:20.815 14:56:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:20.815 14:56:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.815 14:56:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:20.815 14:56:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.815 14:56:53 -- common/autotest_common.sh@1220 -- # return 0 00:16:20.815 14:56:53 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:20.815 14:56:53 -- target/nmic.sh@53 -- # nvmftestfini 00:16:20.815 14:56:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:20.815 14:56:53 -- nvmf/common.sh@116 -- # sync 00:16:20.815 14:56:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:20.815 14:56:53 -- nvmf/common.sh@119 -- # set +e 00:16:20.815 14:56:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:20.815 14:56:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:20.815 rmmod nvme_tcp 00:16:20.815 rmmod nvme_fabrics 00:16:20.815 rmmod nvme_keyring 00:16:20.815 14:56:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:20.815 14:56:53 -- nvmf/common.sh@123 -- # set -e 00:16:20.815 14:56:53 -- nvmf/common.sh@124 -- # return 0 00:16:20.815 14:56:53 -- nvmf/common.sh@477 -- # '[' -n 
86717 ']' 00:16:20.815 14:56:53 -- nvmf/common.sh@478 -- # killprocess 86717 00:16:20.816 14:56:53 -- common/autotest_common.sh@936 -- # '[' -z 86717 ']' 00:16:20.816 14:56:53 -- common/autotest_common.sh@940 -- # kill -0 86717 00:16:20.816 14:56:53 -- common/autotest_common.sh@941 -- # uname 00:16:20.816 14:56:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:20.816 14:56:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86717 00:16:20.816 14:56:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:20.816 14:56:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:20.816 14:56:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86717' 00:16:20.816 killing process with pid 86717 00:16:20.816 14:56:53 -- common/autotest_common.sh@955 -- # kill 86717 00:16:20.816 14:56:53 -- common/autotest_common.sh@960 -- # wait 86717 00:16:21.075 14:56:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:21.075 14:56:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:21.075 14:56:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:21.075 14:56:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.075 14:56:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:21.075 14:56:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.075 14:56:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.075 14:56:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.075 14:56:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:21.075 00:16:21.075 real 0m6.004s 00:16:21.075 user 0m20.277s 00:16:21.075 sys 0m1.329s 00:16:21.075 14:56:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:21.075 ************************************ 00:16:21.075 END TEST nvmf_nmic 00:16:21.075 ************************************ 00:16:21.075 14:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:21.075 14:56:54 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:21.075 14:56:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:21.075 14:56:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.075 14:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:21.075 ************************************ 00:16:21.075 START TEST nvmf_fio_target 00:16:21.075 ************************************ 00:16:21.075 14:56:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:21.334 * Looking for test storage... 
00:16:21.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:21.334 14:56:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:21.334 14:56:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:21.334 14:56:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:21.334 14:56:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:21.334 14:56:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:21.334 14:56:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:21.334 14:56:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:21.334 14:56:54 -- scripts/common.sh@335 -- # IFS=.-: 00:16:21.334 14:56:54 -- scripts/common.sh@335 -- # read -ra ver1 00:16:21.334 14:56:54 -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.334 14:56:54 -- scripts/common.sh@336 -- # read -ra ver2 00:16:21.334 14:56:54 -- scripts/common.sh@337 -- # local 'op=<' 00:16:21.334 14:56:54 -- scripts/common.sh@339 -- # ver1_l=2 00:16:21.334 14:56:54 -- scripts/common.sh@340 -- # ver2_l=1 00:16:21.335 14:56:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:21.335 14:56:54 -- scripts/common.sh@343 -- # case "$op" in 00:16:21.335 14:56:54 -- scripts/common.sh@344 -- # : 1 00:16:21.335 14:56:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:21.335 14:56:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:21.335 14:56:54 -- scripts/common.sh@364 -- # decimal 1 00:16:21.335 14:56:54 -- scripts/common.sh@352 -- # local d=1 00:16:21.335 14:56:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.335 14:56:54 -- scripts/common.sh@354 -- # echo 1 00:16:21.335 14:56:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:21.335 14:56:54 -- scripts/common.sh@365 -- # decimal 2 00:16:21.335 14:56:54 -- scripts/common.sh@352 -- # local d=2 00:16:21.335 14:56:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.335 14:56:54 -- scripts/common.sh@354 -- # echo 2 00:16:21.335 14:56:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:21.335 14:56:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:21.335 14:56:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:21.335 14:56:54 -- scripts/common.sh@367 -- # return 0 00:16:21.335 14:56:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.335 14:56:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:21.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.335 --rc genhtml_branch_coverage=1 00:16:21.335 --rc genhtml_function_coverage=1 00:16:21.335 --rc genhtml_legend=1 00:16:21.335 --rc geninfo_all_blocks=1 00:16:21.335 --rc geninfo_unexecuted_blocks=1 00:16:21.335 00:16:21.335 ' 00:16:21.335 14:56:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:21.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.335 --rc genhtml_branch_coverage=1 00:16:21.335 --rc genhtml_function_coverage=1 00:16:21.335 --rc genhtml_legend=1 00:16:21.335 --rc geninfo_all_blocks=1 00:16:21.335 --rc geninfo_unexecuted_blocks=1 00:16:21.335 00:16:21.335 ' 00:16:21.335 14:56:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:21.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.335 --rc genhtml_branch_coverage=1 00:16:21.335 --rc genhtml_function_coverage=1 00:16:21.335 --rc genhtml_legend=1 00:16:21.335 --rc geninfo_all_blocks=1 00:16:21.335 --rc geninfo_unexecuted_blocks=1 00:16:21.335 00:16:21.335 ' 00:16:21.335 
14:56:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:21.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.335 --rc genhtml_branch_coverage=1 00:16:21.335 --rc genhtml_function_coverage=1 00:16:21.335 --rc genhtml_legend=1 00:16:21.335 --rc geninfo_all_blocks=1 00:16:21.335 --rc geninfo_unexecuted_blocks=1 00:16:21.335 00:16:21.335 ' 00:16:21.335 14:56:54 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:21.335 14:56:54 -- nvmf/common.sh@7 -- # uname -s 00:16:21.335 14:56:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.335 14:56:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.335 14:56:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.335 14:56:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.335 14:56:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.335 14:56:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.335 14:56:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.335 14:56:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.335 14:56:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.335 14:56:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.335 14:56:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:16:21.335 14:56:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:16:21.335 14:56:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.335 14:56:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.335 14:56:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:21.335 14:56:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.335 14:56:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.335 14:56:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.335 14:56:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.335 14:56:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.335 14:56:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.335 14:56:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.335 14:56:54 -- paths/export.sh@5 -- # export PATH 00:16:21.335 14:56:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.335 14:56:54 -- nvmf/common.sh@46 -- # : 0 00:16:21.335 14:56:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:21.335 14:56:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:21.335 14:56:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:21.335 14:56:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.335 14:56:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.335 14:56:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:21.335 14:56:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:21.335 14:56:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:21.335 14:56:54 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.335 14:56:54 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.335 14:56:54 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:21.335 14:56:54 -- target/fio.sh@16 -- # nvmftestinit 00:16:21.335 14:56:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:21.335 14:56:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.335 14:56:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:21.335 14:56:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:21.335 14:56:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:21.335 14:56:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.335 14:56:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.335 14:56:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.335 14:56:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:21.335 14:56:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:21.335 14:56:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:21.335 14:56:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:21.335 14:56:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:21.335 14:56:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:21.335 14:56:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.335 14:56:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.335 14:56:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:21.335 14:56:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:21.335 14:56:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:21.335 14:56:54 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:21.335 14:56:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:21.335 14:56:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.335 14:56:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:21.335 14:56:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:21.335 14:56:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:21.335 14:56:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:21.335 14:56:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:21.335 14:56:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:21.336 Cannot find device "nvmf_tgt_br" 00:16:21.336 14:56:54 -- nvmf/common.sh@154 -- # true 00:16:21.336 14:56:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.336 Cannot find device "nvmf_tgt_br2" 00:16:21.336 14:56:54 -- nvmf/common.sh@155 -- # true 00:16:21.336 14:56:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:21.336 14:56:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:21.336 Cannot find device "nvmf_tgt_br" 00:16:21.336 14:56:54 -- nvmf/common.sh@157 -- # true 00:16:21.336 14:56:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:21.336 Cannot find device "nvmf_tgt_br2" 00:16:21.336 14:56:54 -- nvmf/common.sh@158 -- # true 00:16:21.336 14:56:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:21.595 14:56:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:21.595 14:56:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.595 14:56:54 -- nvmf/common.sh@161 -- # true 00:16:21.595 14:56:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.595 14:56:54 -- nvmf/common.sh@162 -- # true 00:16:21.595 14:56:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:21.595 14:56:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:21.595 14:56:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:21.595 14:56:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:21.595 14:56:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:21.595 14:56:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:21.595 14:56:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:21.595 14:56:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:21.595 14:56:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:21.595 14:56:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:21.595 14:56:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:21.595 14:56:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:21.595 14:56:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:21.595 14:56:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:21.595 14:56:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
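The nvmf_veth_init sequence above builds the virtual topology used by fio.sh: the SPDK target runs inside the nvmf_tgt_ns_spdk network namespace and is reached from the host over veth pairs. Condensed into a sketch (the same ip commands the harness runs, shown here only for orientation):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The bridge-side peers are then enslaved to nvmf_br, port 4420 is opened with iptables, and reachability of 10.0.0.1/10.0.0.2/10.0.0.3 is verified with the pings that follow.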
00:16:21.595 14:56:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:21.595 14:56:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:21.595 14:56:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:21.595 14:56:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:21.595 14:56:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:21.595 14:56:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:21.595 14:56:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:21.595 14:56:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:21.595 14:56:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:21.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:21.595 00:16:21.595 --- 10.0.0.2 ping statistics --- 00:16:21.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.595 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:21.595 14:56:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:21.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:21.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:21.595 00:16:21.595 --- 10.0.0.3 ping statistics --- 00:16:21.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.595 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:21.595 14:56:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:21.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:21.595 00:16:21.595 --- 10.0.0.1 ping statistics --- 00:16:21.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.595 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:21.595 14:56:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.595 14:56:54 -- nvmf/common.sh@421 -- # return 0 00:16:21.595 14:56:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:21.595 14:56:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.595 14:56:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:21.595 14:56:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:21.595 14:56:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.595 14:56:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:21.595 14:56:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:21.855 14:56:54 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:21.855 14:56:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:21.855 14:56:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:21.855 14:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:21.855 14:56:54 -- nvmf/common.sh@469 -- # nvmfpid=87012 00:16:21.855 14:56:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:21.855 14:56:54 -- nvmf/common.sh@470 -- # waitforlisten 87012 00:16:21.855 14:56:54 -- common/autotest_common.sh@829 -- # '[' -z 87012 ']' 00:16:21.855 14:56:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.855 14:56:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:21.855 14:56:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.855 14:56:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.855 14:56:54 -- common/autotest_common.sh@10 -- # set +x 00:16:21.855 [2024-12-01 14:56:54.773408] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:21.855 [2024-12-01 14:56:54.773502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.855 [2024-12-01 14:56:54.912590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.855 [2024-12-01 14:56:54.963975] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:21.855 [2024-12-01 14:56:54.964141] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.855 [2024-12-01 14:56:54.964154] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.855 [2024-12-01 14:56:54.964162] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.855 [2024-12-01 14:56:54.964736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.855 [2024-12-01 14:56:54.964905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.855 [2024-12-01 14:56:54.964986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.855 [2024-12-01 14:56:54.964996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.790 14:56:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.790 14:56:55 -- common/autotest_common.sh@862 -- # return 0 00:16:22.790 14:56:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:22.790 14:56:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:22.790 14:56:55 -- common/autotest_common.sh@10 -- # set +x 00:16:22.790 14:56:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.790 14:56:55 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:23.049 [2024-12-01 14:56:56.029295] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.049 14:56:56 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:23.308 14:56:56 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:23.308 14:56:56 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:23.567 14:56:56 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:23.567 14:56:56 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:24.136 14:56:56 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:24.136 14:56:56 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:24.136 14:56:57 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:24.136 14:56:57 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:24.395 14:56:57 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:24.654 14:56:57 -- 
target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:24.654 14:56:57 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.222 14:56:58 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:25.222 14:56:58 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.222 14:56:58 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:25.222 14:56:58 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:25.481 14:56:58 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:25.740 14:56:58 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:25.740 14:56:58 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.000 14:56:59 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:26.000 14:56:59 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:26.259 14:56:59 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.518 [2024-12-01 14:56:59.466194] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.518 14:56:59 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:26.778 14:56:59 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:27.037 14:56:59 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:27.037 14:57:00 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:27.037 14:57:00 -- common/autotest_common.sh@1187 -- # local i=0 00:16:27.037 14:57:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:27.037 14:57:00 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:27.037 14:57:00 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:27.037 14:57:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:29.571 14:57:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:29.571 14:57:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:29.571 14:57:02 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:29.571 14:57:02 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:29.571 14:57:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.571 14:57:02 -- common/autotest_common.sh@1197 -- # return 0 00:16:29.571 14:57:02 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:29.571 [global] 00:16:29.571 thread=1 00:16:29.571 invalidate=1 00:16:29.571 rw=write 00:16:29.571 time_based=1 00:16:29.571 runtime=1 00:16:29.571 ioengine=libaio 00:16:29.571 direct=1 00:16:29.571 bs=4096 00:16:29.571 iodepth=1 00:16:29.571 norandommap=0 00:16:29.571 numjobs=1 00:16:29.571 00:16:29.571 verify_dump=1 00:16:29.571 verify_backlog=512 
00:16:29.571 verify_state_save=0 00:16:29.571 do_verify=1 00:16:29.571 verify=crc32c-intel 00:16:29.571 [job0] 00:16:29.571 filename=/dev/nvme0n1 00:16:29.571 [job1] 00:16:29.571 filename=/dev/nvme0n2 00:16:29.571 [job2] 00:16:29.571 filename=/dev/nvme0n3 00:16:29.571 [job3] 00:16:29.571 filename=/dev/nvme0n4 00:16:29.571 Could not set queue depth (nvme0n1) 00:16:29.571 Could not set queue depth (nvme0n2) 00:16:29.571 Could not set queue depth (nvme0n3) 00:16:29.571 Could not set queue depth (nvme0n4) 00:16:29.571 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.571 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.571 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.571 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.571 fio-3.35 00:16:29.571 Starting 4 threads 00:16:30.508 00:16:30.508 job0: (groupid=0, jobs=1): err= 0: pid=87304: Sun Dec 1 14:57:03 2024 00:16:30.508 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:30.508 slat (nsec): min=11269, max=72650, avg=14397.60, stdev=5020.73 00:16:30.508 clat (usec): min=130, max=463, avg=187.53, stdev=35.73 00:16:30.508 lat (usec): min=142, max=480, avg=201.93, stdev=36.67 00:16:30.508 clat percentiles (usec): 00:16:30.508 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:16:30.508 | 30.00th=[ 165], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 190], 00:16:30.508 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 233], 95.00th=[ 249], 00:16:30.508 | 99.00th=[ 302], 99.50th=[ 343], 99.90th=[ 420], 99.95th=[ 420], 00:16:30.508 | 99.99th=[ 465] 00:16:30.508 write: IOPS=2696, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:16:30.508 slat (usec): min=17, max=101, avg=23.39, stdev= 7.72 00:16:30.508 clat (usec): min=95, max=604, avg=152.35, stdev=38.42 00:16:30.508 lat (usec): min=113, max=665, avg=175.74, stdev=41.44 00:16:30.508 clat percentiles (usec): 00:16:30.508 | 1.00th=[ 102], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 120], 00:16:30.508 | 30.00th=[ 127], 40.00th=[ 135], 50.00th=[ 145], 60.00th=[ 155], 00:16:30.508 | 70.00th=[ 167], 80.00th=[ 182], 90.00th=[ 204], 95.00th=[ 221], 00:16:30.508 | 99.00th=[ 262], 99.50th=[ 281], 99.90th=[ 441], 99.95th=[ 498], 00:16:30.508 | 99.99th=[ 603] 00:16:30.508 bw ( KiB/s): min=10272, max=10272, per=32.88%, avg=10272.00, stdev= 0.00, samples=1 00:16:30.508 iops : min= 2568, max= 2568, avg=2568.00, stdev= 0.00, samples=1 00:16:30.508 lat (usec) : 100=0.32%, 250=96.54%, 500=3.12%, 750=0.02% 00:16:30.508 cpu : usr=2.00%, sys=7.50%, ctx=5259, majf=0, minf=11 00:16:30.508 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.508 issued rwts: total=2560,2699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.508 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.508 job1: (groupid=0, jobs=1): err= 0: pid=87305: Sun Dec 1 14:57:03 2024 00:16:30.508 read: IOPS=1282, BW=5131KiB/s (5254kB/s)(5136KiB/1001msec) 00:16:30.508 slat (nsec): min=11019, max=74910, avg=22940.22, stdev=9520.26 00:16:30.508 clat (usec): min=180, max=3443, avg=370.35, stdev=101.79 00:16:30.508 lat (usec): min=200, max=3461, avg=393.29, stdev=101.67 
00:16:30.508 clat percentiles (usec): 00:16:30.508 | 1.00th=[ 249], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 322], 00:16:30.508 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 383], 00:16:30.508 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 457], 00:16:30.508 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 644], 99.95th=[ 3458], 00:16:30.508 | 99.99th=[ 3458] 00:16:30.508 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:30.508 slat (usec): min=17, max=124, avg=33.75, stdev=12.05 00:16:30.508 clat (usec): min=114, max=702, avg=283.73, stdev=59.22 00:16:30.508 lat (usec): min=139, max=729, avg=317.47, stdev=62.24 00:16:30.508 clat percentiles (usec): 00:16:30.508 | 1.00th=[ 163], 5.00th=[ 212], 10.00th=[ 225], 20.00th=[ 239], 00:16:30.508 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:16:30.508 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 371], 95.00th=[ 404], 00:16:30.508 | 99.00th=[ 457], 99.50th=[ 486], 99.90th=[ 506], 99.95th=[ 701], 00:16:30.508 | 99.99th=[ 701] 00:16:30.508 bw ( KiB/s): min= 6776, max= 6776, per=21.69%, avg=6776.00, stdev= 0.00, samples=1 00:16:30.508 iops : min= 1694, max= 1694, avg=1694.00, stdev= 0.00, samples=1 00:16:30.508 lat (usec) : 250=15.92%, 500=83.30%, 750=0.74% 00:16:30.508 lat (msec) : 4=0.04% 00:16:30.508 cpu : usr=1.70%, sys=6.20%, ctx=2822, majf=0, minf=9 00:16:30.508 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.508 issued rwts: total=1284,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.508 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.508 job2: (groupid=0, jobs=1): err= 0: pid=87306: Sun Dec 1 14:57:03 2024 00:16:30.508 read: IOPS=1689, BW=6757KiB/s (6919kB/s)(6764KiB/1001msec) 00:16:30.508 slat (nsec): min=13153, max=59811, avg=18438.77, stdev=5939.31 00:16:30.508 clat (usec): min=164, max=3447, avg=279.84, stdev=131.71 00:16:30.508 lat (usec): min=178, max=3461, avg=298.28, stdev=133.15 00:16:30.508 clat percentiles (usec): 00:16:30.508 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 204], 00:16:30.508 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 235], 60.00th=[ 249], 00:16:30.508 | 70.00th=[ 297], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 486], 00:16:30.508 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 1975], 99.95th=[ 3458], 00:16:30.508 | 99.99th=[ 3458] 00:16:30.508 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:30.508 slat (nsec): min=15835, max=89882, avg=27649.86, stdev=8397.29 00:16:30.508 clat (usec): min=111, max=7187, avg=210.80, stdev=167.57 00:16:30.508 lat (usec): min=137, max=7206, avg=238.45, stdev=168.17 00:16:30.508 clat percentiles (usec): 00:16:30.508 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 167], 00:16:30.508 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 204], 00:16:30.508 | 70.00th=[ 221], 80.00th=[ 247], 90.00th=[ 277], 95.00th=[ 293], 00:16:30.508 | 99.00th=[ 359], 99.50th=[ 383], 99.90th=[ 1352], 99.95th=[ 1778], 00:16:30.508 | 99.99th=[ 7177] 00:16:30.508 bw ( KiB/s): min= 9056, max= 9056, per=28.98%, avg=9056.00, stdev= 0.00, samples=1 00:16:30.508 iops : min= 2264, max= 2264, avg=2264.00, stdev= 0.00, samples=1 00:16:30.508 lat (usec) : 250=71.70%, 500=26.13%, 750=1.95%, 1000=0.05% 00:16:30.508 lat (msec) : 2=0.11%, 4=0.03%, 10=0.03% 00:16:30.508 cpu 
: usr=2.00%, sys=6.40%, ctx=3740, majf=0, minf=7 00:16:30.508 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.508 issued rwts: total=1691,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.508 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.508 job3: (groupid=0, jobs=1): err= 0: pid=87307: Sun Dec 1 14:57:03 2024 00:16:30.508 read: IOPS=1281, BW=5127KiB/s (5250kB/s)(5132KiB/1001msec) 00:16:30.508 slat (nsec): min=14922, max=79191, avg=21207.89, stdev=6830.51 00:16:30.508 clat (usec): min=157, max=911, avg=370.29, stdev=57.70 00:16:30.508 lat (usec): min=182, max=943, avg=391.50, stdev=58.70 00:16:30.508 clat percentiles (usec): 00:16:30.508 | 1.00th=[ 210], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 326], 00:16:30.508 | 30.00th=[ 343], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 383], 00:16:30.508 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 433], 95.00th=[ 457], 00:16:30.508 | 99.00th=[ 506], 99.50th=[ 545], 99.90th=[ 914], 99.95th=[ 914], 00:16:30.508 | 99.99th=[ 914] 00:16:30.508 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:30.508 slat (nsec): min=24818, max=99343, avg=39393.93, stdev=9958.80 00:16:30.508 clat (usec): min=151, max=2229, avg=279.96, stdev=80.43 00:16:30.508 lat (usec): min=198, max=2259, avg=319.36, stdev=81.63 00:16:30.508 clat percentiles (usec): 00:16:30.508 | 1.00th=[ 186], 5.00th=[ 206], 10.00th=[ 219], 20.00th=[ 233], 00:16:30.508 | 30.00th=[ 243], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 277], 00:16:30.509 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 375], 95.00th=[ 412], 00:16:30.509 | 99.00th=[ 469], 99.50th=[ 502], 99.90th=[ 914], 99.95th=[ 2245], 00:16:30.509 | 99.99th=[ 2245] 00:16:30.509 bw ( KiB/s): min= 6760, max= 6760, per=21.64%, avg=6760.00, stdev= 0.00, samples=1 00:16:30.509 iops : min= 1690, max= 1690, avg=1690.00, stdev= 0.00, samples=1 00:16:30.509 lat (usec) : 250=20.57%, 500=78.54%, 750=0.74%, 1000=0.11% 00:16:30.509 lat (msec) : 4=0.04% 00:16:30.509 cpu : usr=1.50%, sys=6.90%, ctx=2823, majf=0, minf=9 00:16:30.509 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.509 issued rwts: total=1283,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.509 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.509 00:16:30.509 Run status group 0 (all jobs): 00:16:30.509 READ: bw=26.6MiB/s (27.9MB/s), 5127KiB/s-9.99MiB/s (5250kB/s-10.5MB/s), io=26.6MiB (27.9MB), run=1001-1001msec 00:16:30.509 WRITE: bw=30.5MiB/s (32.0MB/s), 6138KiB/s-10.5MiB/s (6285kB/s-11.0MB/s), io=30.5MiB (32.0MB), run=1001-1001msec 00:16:30.509 00:16:30.509 Disk stats (read/write): 00:16:30.509 nvme0n1: ios=2098/2361, merge=0/0, ticks=490/393, in_queue=883, util=92.18% 00:16:30.509 nvme0n2: ios=1068/1441, merge=0/0, ticks=439/419, in_queue=858, util=92.60% 00:16:30.509 nvme0n3: ios=1536/1764, merge=0/0, ticks=414/377, in_queue=791, util=88.58% 00:16:30.509 nvme0n4: ios=1024/1421, merge=0/0, ticks=382/418, in_queue=800, util=89.66% 00:16:30.509 14:57:03 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:30.509 [global] 00:16:30.509 thread=1 00:16:30.509 
invalidate=1 00:16:30.509 rw=randwrite 00:16:30.509 time_based=1 00:16:30.509 runtime=1 00:16:30.509 ioengine=libaio 00:16:30.509 direct=1 00:16:30.509 bs=4096 00:16:30.509 iodepth=1 00:16:30.509 norandommap=0 00:16:30.509 numjobs=1 00:16:30.509 00:16:30.509 verify_dump=1 00:16:30.509 verify_backlog=512 00:16:30.509 verify_state_save=0 00:16:30.509 do_verify=1 00:16:30.509 verify=crc32c-intel 00:16:30.509 [job0] 00:16:30.509 filename=/dev/nvme0n1 00:16:30.509 [job1] 00:16:30.509 filename=/dev/nvme0n2 00:16:30.509 [job2] 00:16:30.509 filename=/dev/nvme0n3 00:16:30.509 [job3] 00:16:30.509 filename=/dev/nvme0n4 00:16:30.509 Could not set queue depth (nvme0n1) 00:16:30.509 Could not set queue depth (nvme0n2) 00:16:30.509 Could not set queue depth (nvme0n3) 00:16:30.509 Could not set queue depth (nvme0n4) 00:16:30.767 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.767 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.767 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.767 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.767 fio-3.35 00:16:30.767 Starting 4 threads 00:16:32.148 00:16:32.148 job0: (groupid=0, jobs=1): err= 0: pid=87366: Sun Dec 1 14:57:04 2024 00:16:32.148 read: IOPS=2258, BW=9035KiB/s (9252kB/s)(9044KiB/1001msec) 00:16:32.148 slat (nsec): min=15392, max=62411, avg=19419.36, stdev=5377.00 00:16:32.148 clat (usec): min=132, max=950, avg=199.24, stdev=43.62 00:16:32.148 lat (usec): min=154, max=967, avg=218.66, stdev=43.68 00:16:32.148 clat percentiles (usec): 00:16:32.148 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 174], 00:16:32.148 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:16:32.148 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 245], 00:16:32.148 | 99.00th=[ 289], 99.50th=[ 449], 99.90th=[ 725], 99.95th=[ 857], 00:16:32.148 | 99.99th=[ 955] 00:16:32.148 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:32.148 slat (usec): min=22, max=102, avg=29.52, stdev= 8.51 00:16:32.148 clat (usec): min=98, max=422, avg=164.23, stdev=31.41 00:16:32.148 lat (usec): min=128, max=452, avg=193.75, stdev=32.66 00:16:32.148 clat percentiles (usec): 00:16:32.148 | 1.00th=[ 112], 5.00th=[ 121], 10.00th=[ 127], 20.00th=[ 139], 00:16:32.148 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 161], 60.00th=[ 169], 00:16:32.148 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 219], 00:16:32.148 | 99.00th=[ 258], 99.50th=[ 277], 99.90th=[ 388], 99.95th=[ 392], 00:16:32.148 | 99.99th=[ 424] 00:16:32.148 bw ( KiB/s): min=11256, max=11256, per=35.29%, avg=11256.00, stdev= 0.00, samples=1 00:16:32.148 iops : min= 2814, max= 2814, avg=2814.00, stdev= 0.00, samples=1 00:16:32.148 lat (usec) : 100=0.04%, 250=97.49%, 500=2.26%, 750=0.17%, 1000=0.04% 00:16:32.148 cpu : usr=1.70%, sys=9.50%, ctx=4821, majf=0, minf=9 00:16:32.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.148 issued rwts: total=2261,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.148 job1: (groupid=0, jobs=1): err= 
0: pid=87367: Sun Dec 1 14:57:04 2024 00:16:32.148 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:32.148 slat (nsec): min=13235, max=71712, avg=17057.08, stdev=5333.68 00:16:32.148 clat (usec): min=157, max=794, avg=221.82, stdev=32.55 00:16:32.148 lat (usec): min=172, max=808, avg=238.88, stdev=33.49 00:16:32.148 clat percentiles (usec): 00:16:32.148 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 198], 00:16:32.148 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 227], 00:16:32.148 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 269], 00:16:32.148 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 400], 99.95th=[ 693], 00:16:32.148 | 99.99th=[ 791] 00:16:32.148 write: IOPS=2346, BW=9387KiB/s (9612kB/s)(9396KiB/1001msec); 0 zone resets 00:16:32.148 slat (usec): min=19, max=102, avg=26.15, stdev= 7.78 00:16:32.148 clat (usec): min=115, max=2712, avg=188.04, stdev=67.19 00:16:32.148 lat (usec): min=135, max=2747, avg=214.19, stdev=68.66 00:16:32.148 clat percentiles (usec): 00:16:32.148 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 161], 00:16:32.148 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 190], 00:16:32.148 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 227], 95.00th=[ 245], 00:16:32.148 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 433], 99.95th=[ 1582], 00:16:32.148 | 99.99th=[ 2704] 00:16:32.148 bw ( KiB/s): min= 8248, max= 8248, per=25.86%, avg=8248.00, stdev= 0.00, samples=1 00:16:32.148 iops : min= 2062, max= 2062, avg=2062.00, stdev= 0.00, samples=1 00:16:32.148 lat (usec) : 250=91.38%, 500=8.53%, 750=0.02%, 1000=0.02% 00:16:32.148 lat (msec) : 2=0.02%, 4=0.02% 00:16:32.148 cpu : usr=1.90%, sys=6.80%, ctx=4400, majf=0, minf=3 00:16:32.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.148 issued rwts: total=2048,2349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.148 job2: (groupid=0, jobs=1): err= 0: pid=87368: Sun Dec 1 14:57:04 2024 00:16:32.148 read: IOPS=1272, BW=5091KiB/s (5213kB/s)(5096KiB/1001msec) 00:16:32.148 slat (nsec): min=15306, max=69255, avg=19956.42, stdev=6550.14 00:16:32.148 clat (usec): min=228, max=1369, avg=368.14, stdev=54.50 00:16:32.148 lat (usec): min=286, max=1391, avg=388.09, stdev=54.93 00:16:32.148 clat percentiles (usec): 00:16:32.148 | 1.00th=[ 285], 5.00th=[ 306], 10.00th=[ 318], 20.00th=[ 334], 00:16:32.148 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:16:32.148 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 420], 95.00th=[ 441], 00:16:32.148 | 99.00th=[ 494], 99.50th=[ 545], 99.90th=[ 873], 99.95th=[ 1369], 00:16:32.148 | 99.99th=[ 1369] 00:16:32.148 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:32.148 slat (nsec): min=25986, max=98820, avg=39880.16, stdev=9399.59 00:16:32.148 clat (usec): min=126, max=539, avg=284.58, stdev=57.17 00:16:32.148 lat (usec): min=159, max=575, avg=324.46, stdev=57.34 00:16:32.148 clat percentiles (usec): 00:16:32.148 | 1.00th=[ 186], 5.00th=[ 210], 10.00th=[ 223], 20.00th=[ 239], 00:16:32.148 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:16:32.148 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 379], 95.00th=[ 396], 00:16:32.148 | 99.00th=[ 433], 99.50th=[ 457], 99.90th=[ 510], 99.95th=[ 537], 00:16:32.148 
| 99.99th=[ 537] 00:16:32.148 bw ( KiB/s): min= 7848, max= 7848, per=24.61%, avg=7848.00, stdev= 0.00, samples=1 00:16:32.148 iops : min= 1962, max= 1962, avg=1962.00, stdev= 0.00, samples=1 00:16:32.148 lat (usec) : 250=15.87%, 500=83.59%, 750=0.43%, 1000=0.07% 00:16:32.148 lat (msec) : 2=0.04% 00:16:32.148 cpu : usr=2.40%, sys=5.80%, ctx=2810, majf=0, minf=11 00:16:32.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.148 issued rwts: total=1274,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.148 job3: (groupid=0, jobs=1): err= 0: pid=87369: Sun Dec 1 14:57:04 2024 00:16:32.148 read: IOPS=1285, BW=5143KiB/s (5266kB/s)(5148KiB/1001msec) 00:16:32.148 slat (nsec): min=17018, max=77315, avg=27855.31, stdev=8316.87 00:16:32.148 clat (usec): min=202, max=537, avg=355.78, stdev=42.40 00:16:32.148 lat (usec): min=229, max=575, avg=383.63, stdev=42.28 00:16:32.148 clat percentiles (usec): 00:16:32.148 | 1.00th=[ 258], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 322], 00:16:32.148 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:16:32.148 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 412], 95.00th=[ 429], 00:16:32.148 | 99.00th=[ 469], 99.50th=[ 486], 99.90th=[ 515], 99.95th=[ 537], 00:16:32.148 | 99.99th=[ 537] 00:16:32.148 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:32.148 slat (nsec): min=27342, max=95262, avg=39114.48, stdev=8692.48 00:16:32.148 clat (usec): min=149, max=493, avg=285.06, stdev=54.92 00:16:32.148 lat (usec): min=190, max=527, avg=324.17, stdev=55.25 00:16:32.148 clat percentiles (usec): 00:16:32.148 | 1.00th=[ 186], 5.00th=[ 210], 10.00th=[ 225], 20.00th=[ 241], 00:16:32.148 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 289], 00:16:32.148 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 371], 95.00th=[ 396], 00:16:32.148 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 490], 99.95th=[ 494], 00:16:32.148 | 99.99th=[ 494] 00:16:32.149 bw ( KiB/s): min= 7872, max= 7872, per=24.68%, avg=7872.00, stdev= 0.00, samples=1 00:16:32.149 iops : min= 1968, max= 1968, avg=1968.00, stdev= 0.00, samples=1 00:16:32.149 lat (usec) : 250=14.95%, 500=84.95%, 750=0.11% 00:16:32.149 cpu : usr=2.30%, sys=6.70%, ctx=2823, majf=0, minf=23 00:16:32.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.149 issued rwts: total=1287,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.149 00:16:32.149 Run status group 0 (all jobs): 00:16:32.149 READ: bw=26.8MiB/s (28.1MB/s), 5091KiB/s-9035KiB/s (5213kB/s-9252kB/s), io=26.8MiB (28.1MB), run=1001-1001msec 00:16:32.149 WRITE: bw=31.1MiB/s (32.7MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.2MiB (32.7MB), run=1001-1001msec 00:16:32.149 00:16:32.149 Disk stats (read/write): 00:16:32.149 nvme0n1: ios=2098/2129, merge=0/0, ticks=454/380, in_queue=834, util=88.98% 00:16:32.149 nvme0n2: ios=1786/2048, merge=0/0, ticks=422/411, in_queue=833, util=89.80% 00:16:32.149 nvme0n3: ios=1024/1446, merge=0/0, ticks=384/422, in_queue=806, util=89.41% 00:16:32.149 nvme0n4: 
ios=1024/1458, merge=0/0, ticks=368/440, in_queue=808, util=89.78% 00:16:32.149 14:57:04 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:32.149 [global] 00:16:32.149 thread=1 00:16:32.149 invalidate=1 00:16:32.149 rw=write 00:16:32.149 time_based=1 00:16:32.149 runtime=1 00:16:32.149 ioengine=libaio 00:16:32.149 direct=1 00:16:32.149 bs=4096 00:16:32.149 iodepth=128 00:16:32.149 norandommap=0 00:16:32.149 numjobs=1 00:16:32.149 00:16:32.149 verify_dump=1 00:16:32.149 verify_backlog=512 00:16:32.149 verify_state_save=0 00:16:32.149 do_verify=1 00:16:32.149 verify=crc32c-intel 00:16:32.149 [job0] 00:16:32.149 filename=/dev/nvme0n1 00:16:32.149 [job1] 00:16:32.149 filename=/dev/nvme0n2 00:16:32.149 [job2] 00:16:32.149 filename=/dev/nvme0n3 00:16:32.149 [job3] 00:16:32.149 filename=/dev/nvme0n4 00:16:32.149 Could not set queue depth (nvme0n1) 00:16:32.149 Could not set queue depth (nvme0n2) 00:16:32.149 Could not set queue depth (nvme0n3) 00:16:32.149 Could not set queue depth (nvme0n4) 00:16:32.149 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.149 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.149 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.149 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.149 fio-3.35 00:16:32.149 Starting 4 threads 00:16:33.527 00:16:33.527 job0: (groupid=0, jobs=1): err= 0: pid=87424: Sun Dec 1 14:57:06 2024 00:16:33.527 read: IOPS=2045, BW=8183KiB/s (8380kB/s)(8208KiB/1003msec) 00:16:33.527 slat (usec): min=3, max=10096, avg=221.53, stdev=918.67 00:16:33.527 clat (usec): min=406, max=43128, avg=27715.57, stdev=5385.77 00:16:33.527 lat (usec): min=6497, max=43146, avg=27937.10, stdev=5375.22 00:16:33.527 clat percentiles (usec): 00:16:33.527 | 1.00th=[11600], 5.00th=[17171], 10.00th=[21103], 20.00th=[24511], 00:16:33.527 | 30.00th=[25822], 40.00th=[26870], 50.00th=[27919], 60.00th=[29230], 00:16:33.527 | 70.00th=[30278], 80.00th=[31589], 90.00th=[34341], 95.00th=[35390], 00:16:33.527 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[43254], 00:16:33.527 | 99.99th=[43254] 00:16:33.527 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:16:33.527 slat (usec): min=11, max=12595, avg=205.20, stdev=891.78 00:16:33.527 clat (usec): min=6568, max=42344, avg=27224.16, stdev=6241.84 00:16:33.527 lat (usec): min=6608, max=42381, avg=27429.36, stdev=6259.00 00:16:33.527 clat percentiles (usec): 00:16:33.527 | 1.00th=[ 7439], 5.00th=[16319], 10.00th=[19530], 20.00th=[21103], 00:16:33.527 | 30.00th=[23462], 40.00th=[26084], 50.00th=[27919], 60.00th=[30016], 00:16:33.527 | 70.00th=[31589], 80.00th=[32637], 90.00th=[34341], 95.00th=[35390], 00:16:33.527 | 99.00th=[39060], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:16:33.527 | 99.99th=[42206] 00:16:33.527 bw ( KiB/s): min= 9136, max=10352, per=18.97%, avg=9744.00, stdev=859.84, samples=2 00:16:33.527 iops : min= 2284, max= 2588, avg=2436.00, stdev=214.96, samples=2 00:16:33.527 lat (usec) : 500=0.02% 00:16:33.527 lat (msec) : 10=0.69%, 20=11.47%, 50=87.81% 00:16:33.527 cpu : usr=2.79%, sys=8.18%, ctx=641, majf=0, minf=10 00:16:33.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:16:33.527 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.527 issued rwts: total=2052,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.527 job1: (groupid=0, jobs=1): err= 0: pid=87425: Sun Dec 1 14:57:06 2024 00:16:33.527 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:16:33.527 slat (usec): min=5, max=4614, avg=124.34, stdev=588.87 00:16:33.527 clat (usec): min=11950, max=22257, avg=16303.10, stdev=1193.49 00:16:33.527 lat (usec): min=12778, max=22308, avg=16427.44, stdev=1080.51 00:16:33.527 clat percentiles (usec): 00:16:33.527 | 1.00th=[12780], 5.00th=[13698], 10.00th=[14615], 20.00th=[15664], 00:16:33.527 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16319], 60.00th=[16581], 00:16:33.527 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:16:33.527 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:16:33.527 | 99.99th=[22152] 00:16:33.527 write: IOPS=3951, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1002msec); 0 zone resets 00:16:33.527 slat (usec): min=11, max=5517, avg=132.64, stdev=563.55 00:16:33.527 clat (usec): min=406, max=23729, avg=17137.87, stdev=2333.08 00:16:33.527 lat (usec): min=4315, max=23763, avg=17270.51, stdev=2320.05 00:16:33.527 clat percentiles (usec): 00:16:33.527 | 1.00th=[ 9634], 5.00th=[14222], 10.00th=[14615], 20.00th=[15401], 00:16:33.527 | 30.00th=[16057], 40.00th=[16909], 50.00th=[17171], 60.00th=[17695], 00:16:33.528 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19792], 95.00th=[20317], 00:16:33.528 | 99.00th=[21365], 99.50th=[22676], 99.90th=[23725], 99.95th=[23725], 00:16:33.528 | 99.99th=[23725] 00:16:33.528 bw ( KiB/s): min=14264, max=16384, per=29.84%, avg=15324.00, stdev=1499.07, samples=2 00:16:33.528 iops : min= 3566, max= 4096, avg=3831.00, stdev=374.77, samples=2 00:16:33.528 lat (usec) : 500=0.01% 00:16:33.528 lat (msec) : 10=0.70%, 20=95.47%, 50=3.82% 00:16:33.528 cpu : usr=3.60%, sys=10.99%, ctx=540, majf=0, minf=13 00:16:33.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:33.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.528 issued rwts: total=3584,3959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.528 job2: (groupid=0, jobs=1): err= 0: pid=87426: Sun Dec 1 14:57:06 2024 00:16:33.528 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:16:33.528 slat (usec): min=9, max=5684, avg=123.54, stdev=647.47 00:16:33.528 clat (usec): min=11296, max=24716, avg=16282.30, stdev=1599.98 00:16:33.528 lat (usec): min=11311, max=24800, avg=16405.84, stdev=1626.42 00:16:33.528 clat percentiles (usec): 00:16:33.528 | 1.00th=[11994], 5.00th=[13435], 10.00th=[14746], 20.00th=[15401], 00:16:33.528 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16188], 60.00th=[16450], 00:16:33.528 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[19268], 00:16:33.528 | 99.00th=[21365], 99.50th=[21890], 99.90th=[21890], 99.95th=[22152], 00:16:33.528 | 99.99th=[24773] 00:16:33.528 write: IOPS=3994, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1003msec); 0 zone resets 00:16:33.528 slat (usec): min=11, max=7307, avg=131.80, stdev=711.31 00:16:33.528 clat (usec): min=239, max=25439, avg=16972.94, stdev=3080.63 00:16:33.528 lat (usec): min=4170, max=25498, 
avg=17104.74, stdev=3048.08 00:16:33.528 clat percentiles (usec): 00:16:33.528 | 1.00th=[ 5407], 5.00th=[12387], 10.00th=[13042], 20.00th=[13698], 00:16:33.528 | 30.00th=[16581], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:16:33.528 | 70.00th=[18220], 80.00th=[18482], 90.00th=[21103], 95.00th=[22414], 00:16:33.528 | 99.00th=[23725], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:16:33.528 | 99.99th=[25560] 00:16:33.528 bw ( KiB/s): min=14776, max=16280, per=30.24%, avg=15528.00, stdev=1063.49, samples=2 00:16:33.528 iops : min= 3694, max= 4070, avg=3882.00, stdev=265.87, samples=2 00:16:33.528 lat (usec) : 250=0.01% 00:16:33.528 lat (msec) : 10=0.61%, 20=92.24%, 50=7.14% 00:16:33.528 cpu : usr=3.19%, sys=11.98%, ctx=367, majf=0, minf=9 00:16:33.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:33.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.528 issued rwts: total=3584,4006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.528 job3: (groupid=0, jobs=1): err= 0: pid=87427: Sun Dec 1 14:57:06 2024 00:16:33.528 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:16:33.528 slat (usec): min=4, max=10730, avg=230.37, stdev=971.68 00:16:33.528 clat (usec): min=16357, max=42511, avg=30275.47, stdev=4649.00 00:16:33.528 lat (usec): min=17065, max=44012, avg=30505.84, stdev=4617.55 00:16:33.528 clat percentiles (usec): 00:16:33.528 | 1.00th=[21103], 5.00th=[21890], 10.00th=[24511], 20.00th=[26870], 00:16:33.528 | 30.00th=[27919], 40.00th=[28967], 50.00th=[29754], 60.00th=[30802], 00:16:33.528 | 70.00th=[32113], 80.00th=[35390], 90.00th=[36439], 95.00th=[38011], 00:16:33.528 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:16:33.528 | 99.99th=[42730] 00:16:33.528 write: IOPS=2344, BW=9380KiB/s (9605kB/s)(9408KiB/1003msec); 0 zone resets 00:16:33.528 slat (usec): min=11, max=7840, avg=215.83, stdev=847.06 00:16:33.528 clat (usec): min=388, max=37122, avg=27283.82, stdev=5579.32 00:16:33.528 lat (usec): min=6191, max=37152, avg=27499.65, stdev=5574.97 00:16:33.528 clat percentiles (usec): 00:16:33.528 | 1.00th=[ 8094], 5.00th=[17695], 10.00th=[20317], 20.00th=[22676], 00:16:33.528 | 30.00th=[24773], 40.00th=[26346], 50.00th=[27657], 60.00th=[29492], 00:16:33.528 | 70.00th=[31065], 80.00th=[32375], 90.00th=[33817], 95.00th=[34341], 00:16:33.528 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:16:33.528 | 99.99th=[36963] 00:16:33.528 bw ( KiB/s): min= 8849, max= 8960, per=17.34%, avg=8904.50, stdev=78.49, samples=2 00:16:33.528 iops : min= 2212, max= 2240, avg=2226.00, stdev=19.80, samples=2 00:16:33.528 lat (usec) : 500=0.02% 00:16:33.528 lat (msec) : 10=0.84%, 20=3.93%, 50=95.20% 00:16:33.528 cpu : usr=2.40%, sys=8.68%, ctx=725, majf=0, minf=13 00:16:33.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:33.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.528 issued rwts: total=2048,2352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.528 00:16:33.528 Run status group 0 (all jobs): 00:16:33.528 READ: bw=43.9MiB/s (46.0MB/s), 8167KiB/s-14.0MiB/s (8364kB/s-14.7MB/s), io=44.0MiB (46.2MB), 
run=1002-1003msec 00:16:33.528 WRITE: bw=50.1MiB/s (52.6MB/s), 9380KiB/s-15.6MiB/s (9605kB/s-16.4MB/s), io=50.3MiB (52.7MB), run=1002-1003msec 00:16:33.528 00:16:33.528 Disk stats (read/write): 00:16:33.528 nvme0n1: ios=1930/2048, merge=0/0, ticks=13974/14562, in_queue=28536, util=87.88% 00:16:33.528 nvme0n2: ios=3120/3362, merge=0/0, ticks=12074/13192, in_queue=25266, util=88.47% 00:16:33.528 nvme0n3: ios=3072/3387, merge=0/0, ticks=15427/16954, in_queue=32381, util=89.18% 00:16:33.528 nvme0n4: ios=1763/2048, merge=0/0, ticks=12211/12770, in_queue=24981, util=89.23% 00:16:33.528 14:57:06 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:33.528 [global] 00:16:33.528 thread=1 00:16:33.528 invalidate=1 00:16:33.528 rw=randwrite 00:16:33.528 time_based=1 00:16:33.528 runtime=1 00:16:33.528 ioengine=libaio 00:16:33.528 direct=1 00:16:33.528 bs=4096 00:16:33.528 iodepth=128 00:16:33.528 norandommap=0 00:16:33.528 numjobs=1 00:16:33.528 00:16:33.528 verify_dump=1 00:16:33.528 verify_backlog=512 00:16:33.528 verify_state_save=0 00:16:33.528 do_verify=1 00:16:33.528 verify=crc32c-intel 00:16:33.528 [job0] 00:16:33.528 filename=/dev/nvme0n1 00:16:33.528 [job1] 00:16:33.528 filename=/dev/nvme0n2 00:16:33.528 [job2] 00:16:33.528 filename=/dev/nvme0n3 00:16:33.528 [job3] 00:16:33.528 filename=/dev/nvme0n4 00:16:33.528 Could not set queue depth (nvme0n1) 00:16:33.528 Could not set queue depth (nvme0n2) 00:16:33.528 Could not set queue depth (nvme0n3) 00:16:33.528 Could not set queue depth (nvme0n4) 00:16:33.528 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.528 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.528 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.528 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.528 fio-3.35 00:16:33.528 Starting 4 threads 00:16:34.907 00:16:34.907 job0: (groupid=0, jobs=1): err= 0: pid=87490: Sun Dec 1 14:57:07 2024 00:16:34.907 read: IOPS=4415, BW=17.2MiB/s (18.1MB/s)(17.3MiB/1005msec) 00:16:34.907 slat (usec): min=10, max=6886, avg=106.04, stdev=569.62 00:16:34.907 clat (usec): min=862, max=20950, avg=13722.50, stdev=1763.78 00:16:34.907 lat (usec): min=6836, max=20992, avg=13828.54, stdev=1805.35 00:16:34.907 clat percentiles (usec): 00:16:34.907 | 1.00th=[ 8029], 5.00th=[10945], 10.00th=[12125], 20.00th=[12780], 00:16:34.907 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:16:34.907 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15533], 95.00th=[16712], 00:16:34.907 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20841], 99.95th=[20841], 00:16:34.907 | 99.99th=[20841] 00:16:34.907 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:16:34.907 slat (usec): min=14, max=7084, avg=106.11, stdev=561.87 00:16:34.907 clat (usec): min=8003, max=21624, avg=14332.72, stdev=1836.60 00:16:34.907 lat (usec): min=8033, max=21658, avg=14438.83, stdev=1823.21 00:16:34.907 clat percentiles (usec): 00:16:34.907 | 1.00th=[ 8848], 5.00th=[10290], 10.00th=[12256], 20.00th=[13304], 00:16:34.907 | 30.00th=[13698], 40.00th=[14353], 50.00th=[14615], 60.00th=[14746], 00:16:34.907 | 70.00th=[15139], 80.00th=[15664], 90.00th=[16319], 95.00th=[16712], 00:16:34.907 | 99.00th=[19268], 99.50th=[20055], 
99.90th=[21627], 99.95th=[21627], 00:16:34.907 | 99.99th=[21627] 00:16:34.907 bw ( KiB/s): min=18336, max=18528, per=36.96%, avg=18432.00, stdev=135.76, samples=2 00:16:34.907 iops : min= 4584, max= 4632, avg=4608.00, stdev=33.94, samples=2 00:16:34.907 lat (usec) : 1000=0.01% 00:16:34.907 lat (msec) : 10=3.58%, 20=95.98%, 50=0.43% 00:16:34.907 cpu : usr=3.59%, sys=16.93%, ctx=378, majf=0, minf=1 00:16:34.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:34.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.907 issued rwts: total=4438,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.907 job1: (groupid=0, jobs=1): err= 0: pid=87491: Sun Dec 1 14:57:07 2024 00:16:34.907 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:16:34.907 slat (usec): min=3, max=8438, avg=227.13, stdev=922.01 00:16:34.908 clat (usec): min=21548, max=38198, avg=29410.86, stdev=2309.71 00:16:34.908 lat (usec): min=21572, max=38215, avg=29637.99, stdev=2407.28 00:16:34.908 clat percentiles (usec): 00:16:34.908 | 1.00th=[23987], 5.00th=[25822], 10.00th=[27132], 20.00th=[27919], 00:16:34.908 | 30.00th=[28443], 40.00th=[28705], 50.00th=[28967], 60.00th=[29230], 00:16:34.908 | 70.00th=[30016], 80.00th=[31327], 90.00th=[32637], 95.00th=[33817], 00:16:34.908 | 99.00th=[35914], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 00:16:34.908 | 99.99th=[38011] 00:16:34.908 write: IOPS=2222, BW=8888KiB/s (9102kB/s)(8924KiB/1004msec); 0 zone resets 00:16:34.908 slat (usec): min=4, max=10148, avg=233.60, stdev=1112.77 00:16:34.908 clat (usec): min=290, max=42450, avg=29518.68, stdev=4984.26 00:16:34.908 lat (usec): min=4755, max=42490, avg=29752.28, stdev=5073.98 00:16:34.908 clat percentiles (usec): 00:16:34.908 | 1.00th=[ 5342], 5.00th=[23462], 10.00th=[25560], 20.00th=[27395], 00:16:34.908 | 30.00th=[28967], 40.00th=[29492], 50.00th=[30278], 60.00th=[30802], 00:16:34.908 | 70.00th=[31589], 80.00th=[32113], 90.00th=[34341], 95.00th=[35914], 00:16:34.908 | 99.00th=[39060], 99.50th=[39584], 99.90th=[41157], 99.95th=[41681], 00:16:34.908 | 99.99th=[42206] 00:16:34.908 bw ( KiB/s): min= 8192, max= 8632, per=16.87%, avg=8412.00, stdev=311.13, samples=2 00:16:34.908 iops : min= 2048, max= 2158, avg=2103.00, stdev=77.78, samples=2 00:16:34.908 lat (usec) : 500=0.02% 00:16:34.908 lat (msec) : 10=0.98%, 20=0.98%, 50=98.01% 00:16:34.908 cpu : usr=2.79%, sys=5.68%, ctx=562, majf=0, minf=2 00:16:34.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:16:34.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.908 issued rwts: total=2048,2231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.908 job2: (groupid=0, jobs=1): err= 0: pid=87492: Sun Dec 1 14:57:07 2024 00:16:34.908 read: IOPS=3169, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1009msec) 00:16:34.908 slat (usec): min=4, max=17038, avg=162.22, stdev=1059.36 00:16:34.908 clat (usec): min=3626, max=36684, avg=19844.98, stdev=5225.78 00:16:34.908 lat (usec): min=6340, max=36698, avg=20007.20, stdev=5269.04 00:16:34.908 clat percentiles (usec): 00:16:34.908 | 1.00th=[ 7767], 5.00th=[14615], 10.00th=[15270], 20.00th=[16188], 00:16:34.908 | 30.00th=[17171], 
40.00th=[17433], 50.00th=[18220], 60.00th=[18744], 00:16:34.908 | 70.00th=[21627], 80.00th=[22938], 90.00th=[27919], 95.00th=[31589], 00:16:34.908 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:16:34.908 | 99.99th=[36439] 00:16:34.908 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:16:34.908 slat (usec): min=5, max=15612, avg=127.67, stdev=590.49 00:16:34.908 clat (usec): min=3557, max=36649, avg=17945.08, stdev=4108.07 00:16:34.908 lat (usec): min=3586, max=36672, avg=18072.76, stdev=4156.66 00:16:34.908 clat percentiles (usec): 00:16:34.908 | 1.00th=[ 6390], 5.00th=[ 7963], 10.00th=[10159], 20.00th=[16450], 00:16:34.908 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19268], 60.00th=[19792], 00:16:34.908 | 70.00th=[20317], 80.00th=[20579], 90.00th=[21103], 95.00th=[21365], 00:16:34.908 | 99.00th=[21890], 99.50th=[21890], 99.90th=[35390], 99.95th=[36439], 00:16:34.908 | 99.99th=[36439] 00:16:34.908 bw ( KiB/s): min=14016, max=14669, per=28.76%, avg=14342.50, stdev=461.74, samples=2 00:16:34.908 iops : min= 3504, max= 3667, avg=3585.50, stdev=115.26, samples=2 00:16:34.908 lat (msec) : 4=0.09%, 10=5.71%, 20=59.85%, 50=34.36% 00:16:34.908 cpu : usr=4.96%, sys=6.75%, ctx=508, majf=0, minf=4 00:16:34.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:34.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.908 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.908 job3: (groupid=0, jobs=1): err= 0: pid=87493: Sun Dec 1 14:57:07 2024 00:16:34.908 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:16:34.908 slat (usec): min=4, max=9221, avg=240.68, stdev=970.40 00:16:34.908 clat (usec): min=22078, max=42657, avg=29846.58, stdev=3240.74 00:16:34.908 lat (usec): min=22351, max=42670, avg=30087.26, stdev=3310.93 00:16:34.908 clat percentiles (usec): 00:16:34.908 | 1.00th=[22938], 5.00th=[25560], 10.00th=[26608], 20.00th=[27657], 00:16:34.908 | 30.00th=[28443], 40.00th=[28705], 50.00th=[29230], 60.00th=[29754], 00:16:34.908 | 70.00th=[30278], 80.00th=[31851], 90.00th=[34341], 95.00th=[36439], 00:16:34.908 | 99.00th=[39584], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:16:34.908 | 99.99th=[42730] 00:16:34.908 write: IOPS=2145, BW=8581KiB/s (8786kB/s)(8632KiB/1006msec); 0 zone resets 00:16:34.908 slat (usec): min=7, max=11262, avg=226.80, stdev=1100.74 00:16:34.908 clat (usec): min=5583, max=44671, avg=30367.69, stdev=4265.00 00:16:34.908 lat (usec): min=13228, max=44689, avg=30594.50, stdev=4372.32 00:16:34.908 clat percentiles (usec): 00:16:34.908 | 1.00th=[13829], 5.00th=[24249], 10.00th=[26608], 20.00th=[28181], 00:16:34.908 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30278], 60.00th=[30802], 00:16:34.908 | 70.00th=[31589], 80.00th=[33162], 90.00th=[34341], 95.00th=[36439], 00:16:34.908 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[44303], 00:16:34.908 | 99.99th=[44827] 00:16:34.908 bw ( KiB/s): min= 8200, max= 8208, per=16.45%, avg=8204.00, stdev= 5.66, samples=2 00:16:34.908 iops : min= 2050, max= 2052, avg=2051.00, stdev= 1.41, samples=2 00:16:34.908 lat (msec) : 10=0.02%, 20=1.00%, 50=98.98% 00:16:34.908 cpu : usr=1.79%, sys=6.97%, ctx=572, majf=0, minf=3 00:16:34.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:34.908 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.908 issued rwts: total=2048,2158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.908 00:16:34.908 Run status group 0 (all jobs): 00:16:34.908 READ: bw=45.4MiB/s (47.6MB/s), 8143KiB/s-17.2MiB/s (8339kB/s-18.1MB/s), io=45.8MiB (48.1MB), run=1004-1009msec 00:16:34.908 WRITE: bw=48.7MiB/s (51.1MB/s), 8581KiB/s-17.9MiB/s (8786kB/s-18.8MB/s), io=49.1MiB (51.5MB), run=1004-1009msec 00:16:34.908 00:16:34.908 Disk stats (read/write): 00:16:34.908 nvme0n1: ios=3692/4096, merge=0/0, ticks=23092/25146, in_queue=48238, util=88.88% 00:16:34.908 nvme0n2: ios=1627/2048, merge=0/0, ticks=14698/18924, in_queue=33622, util=89.08% 00:16:34.908 nvme0n3: ios=2685/3072, merge=0/0, ticks=51190/53507, in_queue=104697, util=88.88% 00:16:34.908 nvme0n4: ios=1580/2048, merge=0/0, ticks=14962/18661, in_queue=33623, util=89.10% 00:16:34.908 14:57:07 -- target/fio.sh@55 -- # sync 00:16:34.908 14:57:07 -- target/fio.sh@59 -- # fio_pid=87506 00:16:34.908 14:57:07 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:34.908 14:57:07 -- target/fio.sh@61 -- # sleep 3 00:16:34.908 [global] 00:16:34.908 thread=1 00:16:34.908 invalidate=1 00:16:34.908 rw=read 00:16:34.908 time_based=1 00:16:34.908 runtime=10 00:16:34.908 ioengine=libaio 00:16:34.908 direct=1 00:16:34.908 bs=4096 00:16:34.908 iodepth=1 00:16:34.908 norandommap=1 00:16:34.908 numjobs=1 00:16:34.908 00:16:34.908 [job0] 00:16:34.908 filename=/dev/nvme0n1 00:16:34.908 [job1] 00:16:34.908 filename=/dev/nvme0n2 00:16:34.908 [job2] 00:16:34.908 filename=/dev/nvme0n3 00:16:34.908 [job3] 00:16:34.908 filename=/dev/nvme0n4 00:16:34.908 Could not set queue depth (nvme0n1) 00:16:34.908 Could not set queue depth (nvme0n2) 00:16:34.908 Could not set queue depth (nvme0n3) 00:16:34.908 Could not set queue depth (nvme0n4) 00:16:34.908 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.908 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.908 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.908 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.908 fio-3.35 00:16:34.908 Starting 4 threads 00:16:38.195 14:57:10 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:38.195 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=30453760, buflen=4096 00:16:38.195 fio: pid=87550, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:38.195 14:57:11 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:38.195 fio: pid=87549, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:38.195 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=56799232, buflen=4096 00:16:38.195 14:57:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:38.195 14:57:11 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:38.454 fio: pid=87547, err=95/file:io_u.c:1889, func=io_u error, error=Operation not 
supported 00:16:38.454 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=38912000, buflen=4096 00:16:38.454 14:57:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:38.454 14:57:11 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:38.713 fio: pid=87548, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:38.713 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=66707456, buflen=4096 00:16:38.713 00:16:38.713 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87547: Sun Dec 1 14:57:11 2024 00:16:38.713 read: IOPS=2825, BW=11.0MiB/s (11.6MB/s)(37.1MiB/3362msec) 00:16:38.713 slat (usec): min=7, max=14082, avg=19.58, stdev=236.35 00:16:38.713 clat (usec): min=109, max=3458, avg=332.88, stdev=108.17 00:16:38.713 lat (usec): min=138, max=14262, avg=352.47, stdev=257.16 00:16:38.713 clat percentiles (usec): 00:16:38.713 | 1.00th=[ 139], 5.00th=[ 151], 10.00th=[ 167], 20.00th=[ 260], 00:16:38.713 | 30.00th=[ 318], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 367], 00:16:38.713 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 420], 95.00th=[ 449], 00:16:38.713 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 701], 99.95th=[ 1500], 00:16:38.713 | 99.99th=[ 3458] 00:16:38.713 bw ( KiB/s): min= 9728, max=11000, per=20.42%, avg=10449.33, stdev=454.00, samples=6 00:16:38.713 iops : min= 2432, max= 2750, avg=2612.33, stdev=113.50, samples=6 00:16:38.713 lat (usec) : 250=19.15%, 500=79.10%, 750=1.65%, 1000=0.04% 00:16:38.713 lat (msec) : 2=0.01%, 4=0.04% 00:16:38.713 cpu : usr=1.04%, sys=3.51%, ctx=9511, majf=0, minf=1 00:16:38.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:38.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.713 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.713 issued rwts: total=9501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.713 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:38.714 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87548: Sun Dec 1 14:57:11 2024 00:16:38.714 read: IOPS=4424, BW=17.3MiB/s (18.1MB/s)(63.6MiB/3681msec) 00:16:38.714 slat (usec): min=15, max=8856, avg=22.99, stdev=152.87 00:16:38.714 clat (usec): min=120, max=7732, avg=201.34, stdev=92.19 00:16:38.714 lat (usec): min=138, max=9039, avg=224.33, stdev=178.32 00:16:38.714 clat percentiles (usec): 00:16:38.714 | 1.00th=[ 133], 5.00th=[ 143], 10.00th=[ 153], 20.00th=[ 167], 00:16:38.714 | 30.00th=[ 178], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 204], 00:16:38.714 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 247], 95.00th=[ 265], 00:16:38.714 | 99.00th=[ 318], 99.50th=[ 392], 99.90th=[ 873], 99.95th=[ 1811], 00:16:38.714 | 99.99th=[ 3785] 00:16:38.714 bw ( KiB/s): min=16792, max=18601, per=34.41%, avg=17605.86, stdev=666.43, samples=7 00:16:38.714 iops : min= 4198, max= 4650, avg=4401.43, stdev=166.54, samples=7 00:16:38.714 lat (usec) : 250=91.45%, 500=8.28%, 750=0.14%, 1000=0.04% 00:16:38.714 lat (msec) : 2=0.04%, 4=0.04%, 10=0.01% 00:16:38.714 cpu : usr=1.28%, sys=7.01%, ctx=16296, majf=0, minf=2 00:16:38.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:38.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.714 complete : 0=0.1%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.714 issued rwts: total=16287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:38.714 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87549: Sun Dec 1 14:57:11 2024 00:16:38.714 read: IOPS=4398, BW=17.2MiB/s (18.0MB/s)(54.2MiB/3153msec) 00:16:38.714 slat (usec): min=11, max=8811, avg=16.40, stdev=100.86 00:16:38.714 clat (usec): min=132, max=2412, avg=209.55, stdev=44.49 00:16:38.714 lat (usec): min=145, max=9049, avg=225.95, stdev=110.58 00:16:38.714 clat percentiles (usec): 00:16:38.714 | 1.00th=[ 149], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 184], 00:16:38.714 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:16:38.714 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 262], 00:16:38.714 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 465], 99.95th=[ 898], 00:16:38.714 | 99.99th=[ 2114] 00:16:38.714 bw ( KiB/s): min=16968, max=18576, per=34.71%, avg=17761.33, stdev=632.79, samples=6 00:16:38.714 iops : min= 4242, max= 4644, avg=4440.33, stdev=158.20, samples=6 00:16:38.714 lat (usec) : 250=91.92%, 500=8.00%, 750=0.02%, 1000=0.02% 00:16:38.714 lat (msec) : 2=0.02%, 4=0.01% 00:16:38.714 cpu : usr=1.05%, sys=5.30%, ctx=13872, majf=0, minf=2 00:16:38.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:38.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.714 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.714 issued rwts: total=13868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:38.714 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87550: Sun Dec 1 14:57:11 2024 00:16:38.714 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(29.0MiB/2940msec) 00:16:38.714 slat (nsec): min=7444, max=93270, avg=16858.34, stdev=10258.00 00:16:38.714 clat (usec): min=185, max=67698, avg=376.61, stdev=783.01 00:16:38.714 lat (usec): min=195, max=67723, avg=393.47, stdev=783.23 00:16:38.714 clat percentiles (usec): 00:16:38.714 | 1.00th=[ 239], 5.00th=[ 293], 10.00th=[ 314], 20.00th=[ 334], 00:16:38.714 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 375], 00:16:38.714 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 429], 95.00th=[ 453], 00:16:38.714 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 611], 00:16:38.714 | 99.99th=[67634] 00:16:38.714 bw ( KiB/s): min= 8560, max=11008, per=19.60%, avg=10030.40, stdev=918.96, samples=5 00:16:38.714 iops : min= 2140, max= 2752, avg=2507.60, stdev=229.74, samples=5 00:16:38.714 lat (usec) : 250=1.52%, 500=97.08%, 750=1.36% 00:16:38.714 lat (msec) : 4=0.01%, 100=0.01% 00:16:38.714 cpu : usr=0.85%, sys=3.71%, ctx=7438, majf=0, minf=2 00:16:38.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:38.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.714 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.714 issued rwts: total=7436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:38.714 00:16:38.714 Run status group 0 (all jobs): 00:16:38.714 READ: bw=50.0MiB/s (52.4MB/s), 9.88MiB/s-17.3MiB/s (10.4MB/s-18.1MB/s), io=184MiB (193MB), run=2940-3681msec 00:16:38.714 00:16:38.714 Disk stats 
(read/write): 00:16:38.714 nvme0n1: ios=8251/0, merge=0/0, ticks=2949/0, in_queue=2949, util=95.25% 00:16:38.714 nvme0n2: ios=15926/0, merge=0/0, ticks=3335/0, in_queue=3335, util=95.53% 00:16:38.714 nvme0n3: ios=13730/0, merge=0/0, ticks=2929/0, in_queue=2929, util=96.36% 00:16:38.714 nvme0n4: ios=7251/0, merge=0/0, ticks=2700/0, in_queue=2700, util=96.79% 00:16:38.973 14:57:11 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:38.973 14:57:11 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:39.232 14:57:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:39.232 14:57:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:39.491 14:57:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:39.491 14:57:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:39.751 14:57:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:39.751 14:57:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:40.009 14:57:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:40.009 14:57:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:40.267 14:57:13 -- target/fio.sh@69 -- # fio_status=0 00:16:40.267 14:57:13 -- target/fio.sh@70 -- # wait 87506 00:16:40.267 14:57:13 -- target/fio.sh@70 -- # fio_status=4 00:16:40.267 14:57:13 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.267 14:57:13 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.267 14:57:13 -- common/autotest_common.sh@1208 -- # local i=0 00:16:40.267 14:57:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:40.267 14:57:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.268 14:57:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.268 14:57:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:40.268 nvmf hotplug test: fio failed as expected 00:16:40.268 14:57:13 -- common/autotest_common.sh@1220 -- # return 0 00:16:40.268 14:57:13 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:40.268 14:57:13 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:40.268 14:57:13 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.527 14:57:13 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:40.527 14:57:13 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:40.527 14:57:13 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:40.527 14:57:13 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:40.527 14:57:13 -- target/fio.sh@91 -- # nvmftestfini 00:16:40.527 14:57:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:40.527 14:57:13 -- nvmf/common.sh@116 -- # sync 00:16:40.527 14:57:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:40.527 14:57:13 -- nvmf/common.sh@119 -- # set +e 00:16:40.527 14:57:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:40.527 14:57:13 -- 
nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:40.527 rmmod nvme_tcp 00:16:40.527 rmmod nvme_fabrics 00:16:40.527 rmmod nvme_keyring 00:16:40.527 14:57:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:40.527 14:57:13 -- nvmf/common.sh@123 -- # set -e 00:16:40.527 14:57:13 -- nvmf/common.sh@124 -- # return 0 00:16:40.527 14:57:13 -- nvmf/common.sh@477 -- # '[' -n 87012 ']' 00:16:40.527 14:57:13 -- nvmf/common.sh@478 -- # killprocess 87012 00:16:40.527 14:57:13 -- common/autotest_common.sh@936 -- # '[' -z 87012 ']' 00:16:40.527 14:57:13 -- common/autotest_common.sh@940 -- # kill -0 87012 00:16:40.527 14:57:13 -- common/autotest_common.sh@941 -- # uname 00:16:40.527 14:57:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.527 14:57:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87012 00:16:40.527 killing process with pid 87012 00:16:40.527 14:57:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:40.527 14:57:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:40.527 14:57:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87012' 00:16:40.527 14:57:13 -- common/autotest_common.sh@955 -- # kill 87012 00:16:40.527 14:57:13 -- common/autotest_common.sh@960 -- # wait 87012 00:16:40.789 14:57:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:40.789 14:57:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:40.789 14:57:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:40.789 14:57:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.789 14:57:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:40.789 14:57:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.789 14:57:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.789 14:57:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.076 14:57:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:41.076 00:16:41.076 real 0m19.784s 00:16:41.076 user 1m15.892s 00:16:41.076 sys 0m8.280s 00:16:41.076 14:57:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:41.076 14:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:41.076 ************************************ 00:16:41.076 END TEST nvmf_fio_target 00:16:41.076 ************************************ 00:16:41.076 14:57:13 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:41.076 14:57:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:41.076 14:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.076 14:57:13 -- common/autotest_common.sh@10 -- # set +x 00:16:41.076 ************************************ 00:16:41.076 START TEST nvmf_bdevio 00:16:41.076 ************************************ 00:16:41.076 14:57:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:41.076 * Looking for test storage... 
00:16:41.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:41.076 14:57:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:41.076 14:57:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:41.076 14:57:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:41.076 14:57:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:41.076 14:57:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:41.076 14:57:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:41.076 14:57:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:41.076 14:57:14 -- scripts/common.sh@335 -- # IFS=.-: 00:16:41.077 14:57:14 -- scripts/common.sh@335 -- # read -ra ver1 00:16:41.077 14:57:14 -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.077 14:57:14 -- scripts/common.sh@336 -- # read -ra ver2 00:16:41.077 14:57:14 -- scripts/common.sh@337 -- # local 'op=<' 00:16:41.077 14:57:14 -- scripts/common.sh@339 -- # ver1_l=2 00:16:41.077 14:57:14 -- scripts/common.sh@340 -- # ver2_l=1 00:16:41.077 14:57:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:41.077 14:57:14 -- scripts/common.sh@343 -- # case "$op" in 00:16:41.077 14:57:14 -- scripts/common.sh@344 -- # : 1 00:16:41.077 14:57:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:41.077 14:57:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:41.077 14:57:14 -- scripts/common.sh@364 -- # decimal 1 00:16:41.077 14:57:14 -- scripts/common.sh@352 -- # local d=1 00:16:41.077 14:57:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.077 14:57:14 -- scripts/common.sh@354 -- # echo 1 00:16:41.077 14:57:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:41.077 14:57:14 -- scripts/common.sh@365 -- # decimal 2 00:16:41.077 14:57:14 -- scripts/common.sh@352 -- # local d=2 00:16:41.077 14:57:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.077 14:57:14 -- scripts/common.sh@354 -- # echo 2 00:16:41.077 14:57:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:41.077 14:57:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:41.077 14:57:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:41.077 14:57:14 -- scripts/common.sh@367 -- # return 0 00:16:41.077 14:57:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.077 14:57:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:41.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.077 --rc genhtml_branch_coverage=1 00:16:41.077 --rc genhtml_function_coverage=1 00:16:41.077 --rc genhtml_legend=1 00:16:41.077 --rc geninfo_all_blocks=1 00:16:41.077 --rc geninfo_unexecuted_blocks=1 00:16:41.077 00:16:41.077 ' 00:16:41.077 14:57:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:41.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.077 --rc genhtml_branch_coverage=1 00:16:41.077 --rc genhtml_function_coverage=1 00:16:41.077 --rc genhtml_legend=1 00:16:41.077 --rc geninfo_all_blocks=1 00:16:41.077 --rc geninfo_unexecuted_blocks=1 00:16:41.077 00:16:41.077 ' 00:16:41.077 14:57:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:41.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.077 --rc genhtml_branch_coverage=1 00:16:41.077 --rc genhtml_function_coverage=1 00:16:41.077 --rc genhtml_legend=1 00:16:41.077 --rc geninfo_all_blocks=1 00:16:41.077 --rc geninfo_unexecuted_blocks=1 00:16:41.077 00:16:41.077 ' 00:16:41.077 
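Note: the version probing just traced (lt → cmp_versions → decimal in scripts/common.sh) boils down to a field-by-field numeric compare of the lcov version string. A condensed, purely numeric sketch of the same idea is below; it is an illustration only, and the real helper also copes with non-numeric fields via its decimal function.

    # Compare two dotted version strings numerically, field by field.
    ver_lt() {
        local IFS=.- i a=() b=()
        read -ra a <<< "$1"          # split first version on '.' and '-'
        read -ra b <<< "$2"          # split second version the same way
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            if ((${a[i]:-0} < ${b[i]:-0})); then return 0; fi   # strictly older
            if ((${a[i]:-0} > ${b[i]:-0})); then return 1; fi   # strictly newer
        done
        return 1                     # equal versions are not "less than"
    }

    ver_lt 1.15 2 && echo "lcov 1.15 is older than 2.x"   # matches the lt 1.15 2 call above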
14:57:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:41.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.077 --rc genhtml_branch_coverage=1 00:16:41.077 --rc genhtml_function_coverage=1 00:16:41.077 --rc genhtml_legend=1 00:16:41.077 --rc geninfo_all_blocks=1 00:16:41.077 --rc geninfo_unexecuted_blocks=1 00:16:41.077 00:16:41.077 ' 00:16:41.077 14:57:14 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:41.077 14:57:14 -- nvmf/common.sh@7 -- # uname -s 00:16:41.077 14:57:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.077 14:57:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.077 14:57:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.077 14:57:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.077 14:57:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.077 14:57:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.077 14:57:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.077 14:57:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.077 14:57:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.077 14:57:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.348 14:57:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:16:41.348 14:57:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:16:41.348 14:57:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.348 14:57:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.348 14:57:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:41.348 14:57:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.348 14:57:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.348 14:57:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.348 14:57:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.348 14:57:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.348 14:57:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.348 14:57:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.348 14:57:14 -- paths/export.sh@5 -- # export PATH 00:16:41.348 14:57:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.348 14:57:14 -- nvmf/common.sh@46 -- # : 0 00:16:41.348 14:57:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:41.348 14:57:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:41.348 14:57:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:41.348 14:57:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.348 14:57:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.348 14:57:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:41.348 14:57:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:41.348 14:57:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:41.348 14:57:14 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:41.348 14:57:14 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:41.348 14:57:14 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:41.348 14:57:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:41.348 14:57:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.348 14:57:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:41.348 14:57:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:41.348 14:57:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:41.348 14:57:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.348 14:57:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.348 14:57:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.348 14:57:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:41.348 14:57:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:41.348 14:57:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:41.348 14:57:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:41.348 14:57:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:41.348 14:57:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:41.348 14:57:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.348 14:57:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.348 14:57:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:41.348 14:57:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:41.348 14:57:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:41.348 14:57:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:41.348 14:57:14 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:41.348 14:57:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.348 14:57:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:41.348 14:57:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:41.349 14:57:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:41.349 14:57:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:41.349 14:57:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:41.349 14:57:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:41.349 Cannot find device "nvmf_tgt_br" 00:16:41.349 14:57:14 -- nvmf/common.sh@154 -- # true 00:16:41.349 14:57:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.349 Cannot find device "nvmf_tgt_br2" 00:16:41.349 14:57:14 -- nvmf/common.sh@155 -- # true 00:16:41.349 14:57:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:41.349 14:57:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:41.349 Cannot find device "nvmf_tgt_br" 00:16:41.349 14:57:14 -- nvmf/common.sh@157 -- # true 00:16:41.349 14:57:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:41.349 Cannot find device "nvmf_tgt_br2" 00:16:41.349 14:57:14 -- nvmf/common.sh@158 -- # true 00:16:41.349 14:57:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:41.349 14:57:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:41.349 14:57:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.349 14:57:14 -- nvmf/common.sh@161 -- # true 00:16:41.349 14:57:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.349 14:57:14 -- nvmf/common.sh@162 -- # true 00:16:41.349 14:57:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:41.349 14:57:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:41.349 14:57:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:41.349 14:57:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:41.349 14:57:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:41.349 14:57:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:41.349 14:57:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:41.349 14:57:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:41.349 14:57:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:41.349 14:57:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:41.349 14:57:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:41.349 14:57:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:41.349 14:57:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:41.349 14:57:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:41.349 14:57:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:41.349 14:57:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:41.349 14:57:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:41.349 14:57:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:41.349 14:57:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:41.608 14:57:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:41.608 14:57:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:41.608 14:57:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:41.608 14:57:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:41.608 14:57:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:41.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:41.608 00:16:41.608 --- 10.0.0.2 ping statistics --- 00:16:41.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.608 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:41.608 14:57:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:41.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:41.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:16:41.608 00:16:41.608 --- 10.0.0.3 ping statistics --- 00:16:41.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.608 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:41.608 14:57:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:41.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:41.608 00:16:41.608 --- 10.0.0.1 ping statistics --- 00:16:41.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.608 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:41.608 14:57:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.608 14:57:14 -- nvmf/common.sh@421 -- # return 0 00:16:41.608 14:57:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:41.608 14:57:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.608 14:57:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:41.608 14:57:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:41.608 14:57:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.608 14:57:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:41.608 14:57:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:41.608 14:57:14 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:41.608 14:57:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:41.608 14:57:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.608 14:57:14 -- common/autotest_common.sh@10 -- # set +x 00:16:41.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
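For reference, the nvmf_veth_init plumbing traced above reduces to roughly the sequence below. This is a simplified sketch rather than the exact helper: it keeps the interface names and addresses from this run but drops the second target interface (nvmf_tgt_if2 / 10.0.0.3), the iptables ACCEPT rules, and the error handling.

    #!/usr/bin/env bash
    # Sketch of the test network: target in its own namespace, veth pairs bridged on the host.
    set -euo pipefail

    ip netns add nvmf_tgt_ns_spdk                                # target runs inside this namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge                              # bridge joins the host-side peers
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    ping -c 1 10.0.0.2                                           # same reachability check as the log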
00:16:41.608 14:57:14 -- nvmf/common.sh@469 -- # nvmfpid=87882 00:16:41.608 14:57:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:41.608 14:57:14 -- nvmf/common.sh@470 -- # waitforlisten 87882 00:16:41.608 14:57:14 -- common/autotest_common.sh@829 -- # '[' -z 87882 ']' 00:16:41.608 14:57:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.608 14:57:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.609 14:57:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.609 14:57:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.609 14:57:14 -- common/autotest_common.sh@10 -- # set +x 00:16:41.609 [2024-12-01 14:57:14.599041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:41.609 [2024-12-01 14:57:14.599141] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.867 [2024-12-01 14:57:14.735188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.867 [2024-12-01 14:57:14.792798] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:41.867 [2024-12-01 14:57:14.793404] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.867 [2024-12-01 14:57:14.793426] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.867 [2024-12-01 14:57:14.793435] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
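Taken together, the nvmfappstart/waitforlisten step above and the rpc_cmd provisioning traced a little further on amount to roughly the standalone sequence below. The RPC polling loop is an assumption standing in for waitforlisten (which does the same job with more bookkeeping), and rpc_cmd in the trace is a thin wrapper around scripts/rpc.py.

    SPDK_DIR=/home/vagrant/spdk_repo/spdk                        # repo path used throughout this run
    RPC="$SPDK_DIR/scripts/rpc.py"

    # Start the target inside the test namespace with the same shm id, trace flags and core mask.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!

    # Poll the default JSON-RPC socket until the app answers.
    until "$RPC" -t 1 rpc_get_methods > /dev/null 2>&1; do sleep 0.2; done

    # Provisioning, as issued through rpc_cmd in the bdevio.sh trace that follows.
    "$RPC" nvmf_create_transport -t tcp -o -u 8192               # TCP transport, 8 KiB IO unit
    "$RPC" bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB malloc bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420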
00:16:41.867 [2024-12-01 14:57:14.793628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:41.867 [2024-12-01 14:57:14.793833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:41.867 [2024-12-01 14:57:14.793941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:41.867 [2024-12-01 14:57:14.793950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.433 14:57:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.433 14:57:15 -- common/autotest_common.sh@862 -- # return 0 00:16:42.433 14:57:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:42.433 14:57:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.433 14:57:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.692 14:57:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.692 14:57:15 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:42.692 14:57:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.692 14:57:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.692 [2024-12-01 14:57:15.581957] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.692 14:57:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.692 14:57:15 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:42.692 14:57:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.692 14:57:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.692 Malloc0 00:16:42.692 14:57:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.692 14:57:15 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:42.692 14:57:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.692 14:57:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.692 14:57:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.692 14:57:15 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.692 14:57:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.692 14:57:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.692 14:57:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.692 14:57:15 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.692 14:57:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.692 14:57:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.692 [2024-12-01 14:57:15.643920] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.692 14:57:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.692 14:57:15 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:42.692 14:57:15 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:42.692 14:57:15 -- nvmf/common.sh@520 -- # config=() 00:16:42.692 14:57:15 -- nvmf/common.sh@520 -- # local subsystem config 00:16:42.692 14:57:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:42.692 14:57:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:42.692 { 00:16:42.692 "params": { 00:16:42.692 "name": "Nvme$subsystem", 00:16:42.692 "trtype": "$TEST_TRANSPORT", 00:16:42.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.692 "adrfam": "ipv4", 00:16:42.692 "trsvcid": "$NVMF_PORT", 00:16:42.692 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.692 "hdgst": ${hdgst:-false}, 00:16:42.692 "ddgst": ${ddgst:-false} 00:16:42.692 }, 00:16:42.692 "method": "bdev_nvme_attach_controller" 00:16:42.692 } 00:16:42.692 EOF 00:16:42.692 )") 00:16:42.692 14:57:15 -- nvmf/common.sh@542 -- # cat 00:16:42.692 14:57:15 -- nvmf/common.sh@544 -- # jq . 00:16:42.692 14:57:15 -- nvmf/common.sh@545 -- # IFS=, 00:16:42.692 14:57:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:42.692 "params": { 00:16:42.692 "name": "Nvme1", 00:16:42.692 "trtype": "tcp", 00:16:42.692 "traddr": "10.0.0.2", 00:16:42.692 "adrfam": "ipv4", 00:16:42.692 "trsvcid": "4420", 00:16:42.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.692 "hdgst": false, 00:16:42.692 "ddgst": false 00:16:42.692 }, 00:16:42.692 "method": "bdev_nvme_attach_controller" 00:16:42.692 }' 00:16:42.693 [2024-12-01 14:57:15.697949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:42.693 [2024-12-01 14:57:15.698198] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87936 ] 00:16:42.951 [2024-12-01 14:57:15.833057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:42.951 [2024-12-01 14:57:15.915029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.951 [2024-12-01 14:57:15.915145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.951 [2024-12-01 14:57:15.915159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.210 [2024-12-01 14:57:16.120044] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:43.210 [2024-12-01 14:57:16.120098] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:43.210 I/O targets: 00:16:43.210 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:43.210 00:16:43.210 00:16:43.210 CUnit - A unit testing framework for C - Version 2.1-3 00:16:43.210 http://cunit.sourceforge.net/ 00:16:43.210 00:16:43.210 00:16:43.210 Suite: bdevio tests on: Nvme1n1 00:16:43.210 Test: blockdev write read block ...passed 00:16:43.210 Test: blockdev write zeroes read block ...passed 00:16:43.210 Test: blockdev write zeroes read no split ...passed 00:16:43.210 Test: blockdev write zeroes read split ...passed 00:16:43.210 Test: blockdev write zeroes read split partial ...passed 00:16:43.210 Test: blockdev reset ...[2024-12-01 14:57:16.238407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:43.210 [2024-12-01 14:57:16.238847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x797ed0 (9): Bad file descriptor 00:16:43.210 [2024-12-01 14:57:16.250517] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:43.210 passed 00:16:43.210 Test: blockdev write read 8 blocks ...passed 00:16:43.210 Test: blockdev write read size > 128k ...passed 00:16:43.210 Test: blockdev write read invalid size ...passed 00:16:43.210 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:43.210 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:43.210 Test: blockdev write read max offset ...passed 00:16:43.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:43.468 Test: blockdev writev readv 8 blocks ...passed 00:16:43.468 Test: blockdev writev readv 30 x 1block ...passed 00:16:43.468 Test: blockdev writev readv block ...passed 00:16:43.468 Test: blockdev writev readv size > 128k ...passed 00:16:43.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:43.468 Test: blockdev comparev and writev ...[2024-12-01 14:57:16.425058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.468 [2024-12-01 14:57:16.425381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:43.468 [2024-12-01 14:57:16.425469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.468 [2024-12-01 14:57:16.425564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:43.468 [2024-12-01 14:57:16.426051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.468 [2024-12-01 14:57:16.426159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:43.468 [2024-12-01 14:57:16.426267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.468 [2024-12-01 14:57:16.426352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:43.468 [2024-12-01 14:57:16.426862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.468 [2024-12-01 14:57:16.427097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:43.468 [2024-12-01 14:57:16.427202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.468 [2024-12-01 14:57:16.427258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:43.468 [2024-12-01 14:57:16.427678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.468 [2024-12-01 14:57:16.427878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:43.468 [2024-12-01 14:57:16.428138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.468 [2024-12-01 14:57:16.428354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:43.468 passed 00:16:43.468 Test: blockdev nvme passthru rw ...passed 00:16:43.468 Test: blockdev nvme passthru vendor specific ...[2024-12-01 14:57:16.512286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.468 [2024-12-01 14:57:16.512556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:43.468 [2024-12-01 14:57:16.513044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.468 [2024-12-01 14:57:16.513327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:43.468 [2024-12-01 14:57:16.513727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.468 [2024-12-01 14:57:16.513983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:43.468 [2024-12-01 14:57:16.514387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.468 [2024-12-01 14:57:16.514670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:43.468 passed 00:16:43.468 Test: blockdev nvme admin passthru ...passed 00:16:43.468 Test: blockdev copy ...passed 00:16:43.468 00:16:43.468 Run Summary: Type Total Ran Passed Failed Inactive 00:16:43.468 suites 1 1 n/a 0 0 00:16:43.468 tests 23 23 23 0 0 00:16:43.468 asserts 152 152 152 0 n/a 00:16:43.468 00:16:43.468 Elapsed time = 0.896 seconds 00:16:43.727 14:57:16 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.727 14:57:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.727 14:57:16 -- common/autotest_common.sh@10 -- # set +x 00:16:43.986 14:57:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.986 14:57:16 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:43.986 14:57:16 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:43.986 14:57:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:43.986 14:57:16 -- nvmf/common.sh@116 -- # sync 00:16:43.986 14:57:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:43.986 14:57:16 -- nvmf/common.sh@119 -- # set +e 00:16:43.986 14:57:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:43.986 14:57:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:43.986 rmmod nvme_tcp 00:16:43.986 rmmod nvme_fabrics 00:16:43.986 rmmod nvme_keyring 00:16:43.986 14:57:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:43.986 14:57:16 -- nvmf/common.sh@123 -- # set -e 00:16:43.986 14:57:16 -- nvmf/common.sh@124 -- # return 0 00:16:43.986 14:57:16 -- nvmf/common.sh@477 -- # '[' -n 87882 ']' 00:16:43.986 14:57:16 -- nvmf/common.sh@478 -- # killprocess 87882 00:16:43.986 14:57:16 -- common/autotest_common.sh@936 -- # '[' -z 87882 ']' 00:16:43.986 14:57:16 -- common/autotest_common.sh@940 -- # kill -0 87882 00:16:43.986 14:57:16 -- common/autotest_common.sh@941 -- # uname 00:16:43.986 14:57:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:43.986 14:57:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87882 00:16:43.986 killing process with pid 87882 00:16:43.986 
14:57:17 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:43.986 14:57:17 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:43.986 14:57:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87882' 00:16:43.986 14:57:17 -- common/autotest_common.sh@955 -- # kill 87882 00:16:43.986 14:57:17 -- common/autotest_common.sh@960 -- # wait 87882 00:16:44.244 14:57:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:44.244 14:57:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:44.244 14:57:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:44.244 14:57:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.244 14:57:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:44.244 14:57:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.244 14:57:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.244 14:57:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.244 14:57:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:44.244 ************************************ 00:16:44.244 END TEST nvmf_bdevio 00:16:44.244 ************************************ 00:16:44.244 00:16:44.244 real 0m3.344s 00:16:44.244 user 0m12.124s 00:16:44.244 sys 0m0.849s 00:16:44.244 14:57:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:44.244 14:57:17 -- common/autotest_common.sh@10 -- # set +x 00:16:44.503 14:57:17 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:44.503 14:57:17 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:44.503 14:57:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:44.503 14:57:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.503 14:57:17 -- common/autotest_common.sh@10 -- # set +x 00:16:44.503 ************************************ 00:16:44.503 START TEST nvmf_bdevio_no_huge 00:16:44.503 ************************************ 00:16:44.503 14:57:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:44.503 * Looking for test storage... 
00:16:44.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:44.503 14:57:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:44.503 14:57:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:44.503 14:57:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:44.503 14:57:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:44.503 14:57:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:44.503 14:57:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:44.503 14:57:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:44.503 14:57:17 -- scripts/common.sh@335 -- # IFS=.-: 00:16:44.503 14:57:17 -- scripts/common.sh@335 -- # read -ra ver1 00:16:44.503 14:57:17 -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.503 14:57:17 -- scripts/common.sh@336 -- # read -ra ver2 00:16:44.503 14:57:17 -- scripts/common.sh@337 -- # local 'op=<' 00:16:44.503 14:57:17 -- scripts/common.sh@339 -- # ver1_l=2 00:16:44.503 14:57:17 -- scripts/common.sh@340 -- # ver2_l=1 00:16:44.503 14:57:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:44.503 14:57:17 -- scripts/common.sh@343 -- # case "$op" in 00:16:44.503 14:57:17 -- scripts/common.sh@344 -- # : 1 00:16:44.503 14:57:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:44.503 14:57:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:44.503 14:57:17 -- scripts/common.sh@364 -- # decimal 1 00:16:44.503 14:57:17 -- scripts/common.sh@352 -- # local d=1 00:16:44.503 14:57:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.503 14:57:17 -- scripts/common.sh@354 -- # echo 1 00:16:44.503 14:57:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:44.503 14:57:17 -- scripts/common.sh@365 -- # decimal 2 00:16:44.503 14:57:17 -- scripts/common.sh@352 -- # local d=2 00:16:44.503 14:57:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.503 14:57:17 -- scripts/common.sh@354 -- # echo 2 00:16:44.503 14:57:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:44.503 14:57:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:44.503 14:57:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:44.503 14:57:17 -- scripts/common.sh@367 -- # return 0 00:16:44.503 14:57:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.503 14:57:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:44.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.503 --rc genhtml_branch_coverage=1 00:16:44.503 --rc genhtml_function_coverage=1 00:16:44.503 --rc genhtml_legend=1 00:16:44.503 --rc geninfo_all_blocks=1 00:16:44.503 --rc geninfo_unexecuted_blocks=1 00:16:44.503 00:16:44.503 ' 00:16:44.503 14:57:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:44.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.503 --rc genhtml_branch_coverage=1 00:16:44.503 --rc genhtml_function_coverage=1 00:16:44.503 --rc genhtml_legend=1 00:16:44.503 --rc geninfo_all_blocks=1 00:16:44.503 --rc geninfo_unexecuted_blocks=1 00:16:44.503 00:16:44.503 ' 00:16:44.503 14:57:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:44.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.503 --rc genhtml_branch_coverage=1 00:16:44.503 --rc genhtml_function_coverage=1 00:16:44.503 --rc genhtml_legend=1 00:16:44.503 --rc geninfo_all_blocks=1 00:16:44.503 --rc geninfo_unexecuted_blocks=1 00:16:44.503 00:16:44.503 ' 00:16:44.503 
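The lcov check traced above (IFS=.-, read -ra ver1/ver2, per-component decimal compare) is a plain-bash field-wise version comparison. A minimal standalone sketch of the same idea — not the exact scripts/common.sh implementation, and numeric components only:

    # Sketch: split versions on '.' and '-', compare component by component;
    # succeeds when $1 is strictly lower than $2.
    version_lt() {
        local IFS=.-
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0   # earlier component already decides it
            (( x > y )) && return 1
        done
        return 1                      # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"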
14:57:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:44.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.503 --rc genhtml_branch_coverage=1 00:16:44.503 --rc genhtml_function_coverage=1 00:16:44.503 --rc genhtml_legend=1 00:16:44.503 --rc geninfo_all_blocks=1 00:16:44.503 --rc geninfo_unexecuted_blocks=1 00:16:44.503 00:16:44.503 ' 00:16:44.503 14:57:17 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.503 14:57:17 -- nvmf/common.sh@7 -- # uname -s 00:16:44.503 14:57:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.503 14:57:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.503 14:57:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.503 14:57:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.503 14:57:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.503 14:57:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.503 14:57:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.503 14:57:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.503 14:57:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.503 14:57:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.503 14:57:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:16:44.503 14:57:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:16:44.503 14:57:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.503 14:57:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.503 14:57:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.503 14:57:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.503 14:57:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.503 14:57:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.503 14:57:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.503 14:57:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.504 14:57:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.504 14:57:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.504 14:57:17 -- paths/export.sh@5 -- # export PATH 00:16:44.504 14:57:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.504 14:57:17 -- nvmf/common.sh@46 -- # : 0 00:16:44.504 14:57:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:44.504 14:57:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:44.504 14:57:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:44.504 14:57:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.504 14:57:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.504 14:57:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:44.504 14:57:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:44.504 14:57:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:44.504 14:57:17 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.504 14:57:17 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.504 14:57:17 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:44.504 14:57:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:44.504 14:57:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.504 14:57:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:44.504 14:57:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:44.504 14:57:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:44.504 14:57:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.504 14:57:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.504 14:57:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.504 14:57:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:44.504 14:57:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:44.504 14:57:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:44.504 14:57:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:44.504 14:57:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:44.504 14:57:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:44.504 14:57:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.504 14:57:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.504 14:57:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:44.504 14:57:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:44.504 14:57:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.504 14:57:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.504 14:57:17 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.504 14:57:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.504 14:57:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.504 14:57:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.504 14:57:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.504 14:57:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.504 14:57:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:44.504 14:57:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:44.504 Cannot find device "nvmf_tgt_br" 00:16:44.504 14:57:17 -- nvmf/common.sh@154 -- # true 00:16:44.504 14:57:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.504 Cannot find device "nvmf_tgt_br2" 00:16:44.504 14:57:17 -- nvmf/common.sh@155 -- # true 00:16:44.504 14:57:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:44.504 14:57:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:44.763 Cannot find device "nvmf_tgt_br" 00:16:44.763 14:57:17 -- nvmf/common.sh@157 -- # true 00:16:44.763 14:57:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:44.763 Cannot find device "nvmf_tgt_br2" 00:16:44.763 14:57:17 -- nvmf/common.sh@158 -- # true 00:16:44.763 14:57:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:44.763 14:57:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:44.763 14:57:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.763 14:57:17 -- nvmf/common.sh@161 -- # true 00:16:44.763 14:57:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.763 14:57:17 -- nvmf/common.sh@162 -- # true 00:16:44.763 14:57:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.763 14:57:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.763 14:57:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.763 14:57:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.763 14:57:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.763 14:57:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.763 14:57:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.763 14:57:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:44.763 14:57:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:44.763 14:57:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:44.763 14:57:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:44.763 14:57:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:44.763 14:57:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:44.763 14:57:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.763 14:57:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.763 14:57:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:44.763 14:57:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:44.763 14:57:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:44.763 14:57:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.763 14:57:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.763 14:57:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.763 14:57:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.763 14:57:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.763 14:57:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:44.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:16:44.763 00:16:44.763 --- 10.0.0.2 ping statistics --- 00:16:44.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.763 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:16:44.763 14:57:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:44.763 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.763 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:44.763 00:16:44.763 --- 10.0.0.3 ping statistics --- 00:16:44.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.763 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:44.763 14:57:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:16:44.763 00:16:44.763 --- 10.0.0.1 ping statistics --- 00:16:44.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.763 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:44.763 14:57:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.763 14:57:17 -- nvmf/common.sh@421 -- # return 0 00:16:44.763 14:57:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:44.763 14:57:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.763 14:57:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:44.763 14:57:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:44.763 14:57:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.763 14:57:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:44.763 14:57:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:45.022 14:57:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:45.022 14:57:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:45.022 14:57:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.022 14:57:17 -- common/autotest_common.sh@10 -- # set +x 00:16:45.022 14:57:17 -- nvmf/common.sh@469 -- # nvmfpid=88137 00:16:45.022 14:57:17 -- nvmf/common.sh@470 -- # waitforlisten 88137 00:16:45.022 14:57:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:45.022 14:57:17 -- common/autotest_common.sh@829 -- # '[' -z 88137 ']' 00:16:45.022 14:57:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.022 14:57:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
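The nvmf_veth_init sequence traced above builds the whole test network out of veth pairs, one bridge, and one network namespace: the target addresses (10.0.0.2, 10.0.0.3) live inside nvmf_tgt_ns_spdk, the initiator address (10.0.0.1) stays in the root namespace, and the *_br peer ends are enslaved to nvmf_br. A condensed root-shell sketch of that topology, using the same device and namespace names (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here):

    # Sketch of the veth/bridge topology nvmf_veth_init creates (run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the two host-side peers so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP traffic in and across the bridge, then sanity-check with ping.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2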
00:16:45.022 14:57:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.022 14:57:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.022 14:57:17 -- common/autotest_common.sh@10 -- # set +x 00:16:45.022 [2024-12-01 14:57:17.948105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:45.022 [2024-12-01 14:57:17.948198] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:45.022 [2024-12-01 14:57:18.096428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.281 [2024-12-01 14:57:18.223250] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:45.281 [2024-12-01 14:57:18.223440] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.281 [2024-12-01 14:57:18.223458] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.281 [2024-12-01 14:57:18.223470] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.281 [2024-12-01 14:57:18.223632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:45.281 [2024-12-01 14:57:18.224353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:45.281 [2024-12-01 14:57:18.224488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:45.281 [2024-12-01 14:57:18.224490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.849 14:57:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.849 14:57:18 -- common/autotest_common.sh@862 -- # return 0 00:16:45.849 14:57:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:45.849 14:57:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.849 14:57:18 -- common/autotest_common.sh@10 -- # set +x 00:16:46.108 14:57:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.108 14:57:18 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.108 14:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.108 14:57:18 -- common/autotest_common.sh@10 -- # set +x 00:16:46.108 [2024-12-01 14:57:18.994314] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.108 14:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.108 14:57:19 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.108 14:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.108 14:57:19 -- common/autotest_common.sh@10 -- # set +x 00:16:46.108 Malloc0 00:16:46.108 14:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.108 14:57:19 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:46.108 14:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.108 14:57:19 -- common/autotest_common.sh@10 -- # set +x 00:16:46.108 14:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.108 14:57:19 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.108 14:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.108 
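After the target starts, bdevio.sh provisions it through a handful of rpc_cmd calls: create the TCP transport, create the malloc bdev, create the subsystem, attach the bdev as a namespace (the listener on 10.0.0.2:4420 is added immediately below). rpc_cmd is a wrapper around scripts/rpc.py, so an equivalent manual sequence against the same RPC socket would look roughly like this sketch:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with the options the test passes (-t tcp -o -u 8192).
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB malloc bdev with 512-byte blocks (131072 blocks, matching the
    # "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" I/O target reported above).
    $RPC bdev_malloc_create 64 512 -b Malloc0

    # Subsystem that allows any host (-a) with the test serial number, then
    # expose Malloc0 as its namespace and listen on the target-side address.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420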
14:57:19 -- common/autotest_common.sh@10 -- # set +x 00:16:46.108 14:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.108 14:57:19 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.108 14:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.108 14:57:19 -- common/autotest_common.sh@10 -- # set +x 00:16:46.108 [2024-12-01 14:57:19.033001] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.108 14:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.108 14:57:19 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:46.108 14:57:19 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:46.108 14:57:19 -- nvmf/common.sh@520 -- # config=() 00:16:46.108 14:57:19 -- nvmf/common.sh@520 -- # local subsystem config 00:16:46.108 14:57:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:46.108 14:57:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:46.108 { 00:16:46.108 "params": { 00:16:46.108 "name": "Nvme$subsystem", 00:16:46.108 "trtype": "$TEST_TRANSPORT", 00:16:46.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:46.108 "adrfam": "ipv4", 00:16:46.108 "trsvcid": "$NVMF_PORT", 00:16:46.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:46.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:46.108 "hdgst": ${hdgst:-false}, 00:16:46.108 "ddgst": ${ddgst:-false} 00:16:46.108 }, 00:16:46.108 "method": "bdev_nvme_attach_controller" 00:16:46.108 } 00:16:46.108 EOF 00:16:46.108 )") 00:16:46.108 14:57:19 -- nvmf/common.sh@542 -- # cat 00:16:46.108 14:57:19 -- nvmf/common.sh@544 -- # jq . 00:16:46.108 14:57:19 -- nvmf/common.sh@545 -- # IFS=, 00:16:46.108 14:57:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:46.108 "params": { 00:16:46.108 "name": "Nvme1", 00:16:46.108 "trtype": "tcp", 00:16:46.108 "traddr": "10.0.0.2", 00:16:46.108 "adrfam": "ipv4", 00:16:46.108 "trsvcid": "4420", 00:16:46.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:46.109 "hdgst": false, 00:16:46.109 "ddgst": false 00:16:46.109 }, 00:16:46.109 "method": "bdev_nvme_attach_controller" 00:16:46.109 }' 00:16:46.109 [2024-12-01 14:57:19.093996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:46.109 [2024-12-01 14:57:19.094099] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88191 ] 00:16:46.367 [2024-12-01 14:57:19.238312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.367 [2024-12-01 14:57:19.341479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.367 [2024-12-01 14:57:19.341619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.367 [2024-12-01 14:57:19.341836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.627 [2024-12-01 14:57:19.526647] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
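The gen_nvmf_target_json heredoc/jq dance above renders one bdev_nvme_attach_controller entry per subsystem and hands the resulting JSON config to bdevio over file descriptor 62 (--json /dev/fd/62). Pretty-printed, the single entry emitted for this run (as shown by the printf trace) is:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

In other words, the bdevio process acts as the NVMe/TCP host, attaching to cnode1 at 10.0.0.2:4420 with header and data digests disabled.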
00:16:46.627 [2024-12-01 14:57:19.526911] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:46.627 I/O targets: 00:16:46.627 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:46.627 00:16:46.627 00:16:46.627 CUnit - A unit testing framework for C - Version 2.1-3 00:16:46.627 http://cunit.sourceforge.net/ 00:16:46.627 00:16:46.627 00:16:46.627 Suite: bdevio tests on: Nvme1n1 00:16:46.627 Test: blockdev write read block ...passed 00:16:46.627 Test: blockdev write zeroes read block ...passed 00:16:46.627 Test: blockdev write zeroes read no split ...passed 00:16:46.627 Test: blockdev write zeroes read split ...passed 00:16:46.627 Test: blockdev write zeroes read split partial ...passed 00:16:46.627 Test: blockdev reset ...[2024-12-01 14:57:19.658066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:46.627 [2024-12-01 14:57:19.658262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa2820 (9): Bad file descriptor 00:16:46.627 [2024-12-01 14:57:19.675472] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:46.627 passed 00:16:46.627 Test: blockdev write read 8 blocks ...passed 00:16:46.627 Test: blockdev write read size > 128k ...passed 00:16:46.627 Test: blockdev write read invalid size ...passed 00:16:46.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:46.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:46.627 Test: blockdev write read max offset ...passed 00:16:46.885 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:46.885 Test: blockdev writev readv 8 blocks ...passed 00:16:46.885 Test: blockdev writev readv 30 x 1block ...passed 00:16:46.885 Test: blockdev writev readv block ...passed 00:16:46.885 Test: blockdev writev readv size > 128k ...passed 00:16:46.885 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:46.885 Test: blockdev comparev and writev ...[2024-12-01 14:57:19.852764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.885 [2024-12-01 14:57:19.852934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.885 [2024-12-01 14:57:19.853069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.885 [2024-12-01 14:57:19.853254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.885 [2024-12-01 14:57:19.853827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.886 [2024-12-01 14:57:19.853849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:46.886 [2024-12-01 14:57:19.853866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.886 [2024-12-01 14:57:19.853876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:46.886 [2024-12-01 14:57:19.854201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.886 [2024-12-01 14:57:19.854221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:46.886 [2024-12-01 14:57:19.854236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.886 [2024-12-01 14:57:19.854246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:46.886 [2024-12-01 14:57:19.854532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.886 [2024-12-01 14:57:19.854551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:46.886 [2024-12-01 14:57:19.854566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.886 [2024-12-01 14:57:19.854575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:46.886 passed 00:16:46.886 Test: blockdev nvme passthru rw ...passed 00:16:46.886 Test: blockdev nvme passthru vendor specific ...[2024-12-01 14:57:19.939308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.886 [2024-12-01 14:57:19.939337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:46.886 [2024-12-01 14:57:19.939456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.886 [2024-12-01 14:57:19.939478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:46.886 [2024-12-01 14:57:19.939597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.886 [2024-12-01 14:57:19.939620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:46.886 [2024-12-01 14:57:19.939728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.886 [2024-12-01 14:57:19.939759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:46.886 passed 00:16:46.886 Test: blockdev nvme admin passthru ...passed 00:16:47.144 Test: blockdev copy ...passed 00:16:47.144 00:16:47.144 Run Summary: Type Total Ran Passed Failed Inactive 00:16:47.144 suites 1 1 n/a 0 0 00:16:47.144 tests 23 23 23 0 0 00:16:47.144 asserts 152 152 152 0 n/a 00:16:47.144 00:16:47.144 Elapsed time = 0.943 seconds 00:16:47.402 14:57:20 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.402 14:57:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.402 14:57:20 -- common/autotest_common.sh@10 -- # set +x 00:16:47.402 14:57:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.402 14:57:20 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:47.402 14:57:20 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:47.402 14:57:20 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:47.402 14:57:20 -- nvmf/common.sh@116 -- # sync 00:16:47.402 14:57:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:47.402 14:57:20 -- nvmf/common.sh@119 -- # set +e 00:16:47.402 14:57:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:47.402 14:57:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:47.402 rmmod nvme_tcp 00:16:47.402 rmmod nvme_fabrics 00:16:47.661 rmmod nvme_keyring 00:16:47.661 14:57:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:47.661 14:57:20 -- nvmf/common.sh@123 -- # set -e 00:16:47.661 14:57:20 -- nvmf/common.sh@124 -- # return 0 00:16:47.661 14:57:20 -- nvmf/common.sh@477 -- # '[' -n 88137 ']' 00:16:47.661 14:57:20 -- nvmf/common.sh@478 -- # killprocess 88137 00:16:47.661 14:57:20 -- common/autotest_common.sh@936 -- # '[' -z 88137 ']' 00:16:47.661 14:57:20 -- common/autotest_common.sh@940 -- # kill -0 88137 00:16:47.661 14:57:20 -- common/autotest_common.sh@941 -- # uname 00:16:47.661 14:57:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.661 14:57:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88137 00:16:47.661 14:57:20 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:47.661 14:57:20 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:47.661 killing process with pid 88137 00:16:47.661 14:57:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88137' 00:16:47.661 14:57:20 -- common/autotest_common.sh@955 -- # kill 88137 00:16:47.661 14:57:20 -- common/autotest_common.sh@960 -- # wait 88137 00:16:47.918 14:57:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:47.919 14:57:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:47.919 14:57:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:47.919 14:57:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.919 14:57:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:47.919 14:57:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.919 14:57:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.919 14:57:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.919 14:57:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:47.919 00:16:47.919 real 0m3.633s 00:16:47.919 user 0m13.009s 00:16:47.919 sys 0m1.400s 00:16:47.919 14:57:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:47.919 ************************************ 00:16:47.919 14:57:21 -- common/autotest_common.sh@10 -- # set +x 00:16:47.919 END TEST nvmf_bdevio_no_huge 00:16:47.919 ************************************ 00:16:48.177 14:57:21 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:48.177 14:57:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:48.177 14:57:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:48.177 14:57:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.177 ************************************ 00:16:48.177 START TEST nvmf_tls 00:16:48.177 ************************************ 00:16:48.177 14:57:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:48.177 * Looking for test storage... 
00:16:48.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:48.177 14:57:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:48.177 14:57:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:48.177 14:57:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:48.177 14:57:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:48.177 14:57:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:48.177 14:57:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:48.177 14:57:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:48.177 14:57:21 -- scripts/common.sh@335 -- # IFS=.-: 00:16:48.177 14:57:21 -- scripts/common.sh@335 -- # read -ra ver1 00:16:48.177 14:57:21 -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.177 14:57:21 -- scripts/common.sh@336 -- # read -ra ver2 00:16:48.177 14:57:21 -- scripts/common.sh@337 -- # local 'op=<' 00:16:48.177 14:57:21 -- scripts/common.sh@339 -- # ver1_l=2 00:16:48.177 14:57:21 -- scripts/common.sh@340 -- # ver2_l=1 00:16:48.177 14:57:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:48.177 14:57:21 -- scripts/common.sh@343 -- # case "$op" in 00:16:48.177 14:57:21 -- scripts/common.sh@344 -- # : 1 00:16:48.177 14:57:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:48.177 14:57:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:48.177 14:57:21 -- scripts/common.sh@364 -- # decimal 1 00:16:48.177 14:57:21 -- scripts/common.sh@352 -- # local d=1 00:16:48.177 14:57:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.177 14:57:21 -- scripts/common.sh@354 -- # echo 1 00:16:48.177 14:57:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:48.177 14:57:21 -- scripts/common.sh@365 -- # decimal 2 00:16:48.177 14:57:21 -- scripts/common.sh@352 -- # local d=2 00:16:48.177 14:57:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.177 14:57:21 -- scripts/common.sh@354 -- # echo 2 00:16:48.177 14:57:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:48.177 14:57:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:48.177 14:57:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:48.177 14:57:21 -- scripts/common.sh@367 -- # return 0 00:16:48.177 14:57:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.177 14:57:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:48.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.177 --rc genhtml_branch_coverage=1 00:16:48.177 --rc genhtml_function_coverage=1 00:16:48.177 --rc genhtml_legend=1 00:16:48.177 --rc geninfo_all_blocks=1 00:16:48.177 --rc geninfo_unexecuted_blocks=1 00:16:48.177 00:16:48.177 ' 00:16:48.177 14:57:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:48.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.178 --rc genhtml_branch_coverage=1 00:16:48.178 --rc genhtml_function_coverage=1 00:16:48.178 --rc genhtml_legend=1 00:16:48.178 --rc geninfo_all_blocks=1 00:16:48.178 --rc geninfo_unexecuted_blocks=1 00:16:48.178 00:16:48.178 ' 00:16:48.178 14:57:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:48.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.178 --rc genhtml_branch_coverage=1 00:16:48.178 --rc genhtml_function_coverage=1 00:16:48.178 --rc genhtml_legend=1 00:16:48.178 --rc geninfo_all_blocks=1 00:16:48.178 --rc geninfo_unexecuted_blocks=1 00:16:48.178 00:16:48.178 ' 00:16:48.178 
14:57:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:48.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.178 --rc genhtml_branch_coverage=1 00:16:48.178 --rc genhtml_function_coverage=1 00:16:48.178 --rc genhtml_legend=1 00:16:48.178 --rc geninfo_all_blocks=1 00:16:48.178 --rc geninfo_unexecuted_blocks=1 00:16:48.178 00:16:48.178 ' 00:16:48.178 14:57:21 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.178 14:57:21 -- nvmf/common.sh@7 -- # uname -s 00:16:48.178 14:57:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.178 14:57:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.178 14:57:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.178 14:57:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.178 14:57:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.178 14:57:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.178 14:57:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.178 14:57:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.178 14:57:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.178 14:57:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.178 14:57:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:16:48.178 14:57:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:16:48.178 14:57:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.178 14:57:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.178 14:57:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.178 14:57:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.178 14:57:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.178 14:57:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.178 14:57:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.178 14:57:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.178 14:57:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.178 14:57:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.178 14:57:21 -- paths/export.sh@5 -- # export PATH 00:16:48.178 14:57:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.178 14:57:21 -- nvmf/common.sh@46 -- # : 0 00:16:48.178 14:57:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:48.178 14:57:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:48.178 14:57:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:48.178 14:57:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.178 14:57:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.178 14:57:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:48.178 14:57:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:48.178 14:57:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:48.178 14:57:21 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.178 14:57:21 -- target/tls.sh@71 -- # nvmftestinit 00:16:48.178 14:57:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:48.178 14:57:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.178 14:57:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:48.178 14:57:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:48.178 14:57:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:48.178 14:57:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.178 14:57:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.178 14:57:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.178 14:57:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:48.178 14:57:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:48.178 14:57:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:48.178 14:57:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:48.178 14:57:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:48.178 14:57:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:48.178 14:57:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.178 14:57:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.178 14:57:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:48.178 14:57:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:48.178 14:57:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:48.178 14:57:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.178 14:57:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.178 
14:57:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.178 14:57:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.178 14:57:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.178 14:57:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.178 14:57:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.178 14:57:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:48.178 14:57:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:48.178 Cannot find device "nvmf_tgt_br" 00:16:48.178 14:57:21 -- nvmf/common.sh@154 -- # true 00:16:48.178 14:57:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.437 Cannot find device "nvmf_tgt_br2" 00:16:48.437 14:57:21 -- nvmf/common.sh@155 -- # true 00:16:48.437 14:57:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:48.437 14:57:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:48.437 Cannot find device "nvmf_tgt_br" 00:16:48.437 14:57:21 -- nvmf/common.sh@157 -- # true 00:16:48.437 14:57:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:48.437 Cannot find device "nvmf_tgt_br2" 00:16:48.437 14:57:21 -- nvmf/common.sh@158 -- # true 00:16:48.437 14:57:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:48.437 14:57:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:48.437 14:57:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.437 14:57:21 -- nvmf/common.sh@161 -- # true 00:16:48.437 14:57:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.437 14:57:21 -- nvmf/common.sh@162 -- # true 00:16:48.437 14:57:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.437 14:57:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.437 14:57:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.437 14:57:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.437 14:57:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.437 14:57:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.437 14:57:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.437 14:57:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:48.437 14:57:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:48.437 14:57:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:48.437 14:57:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:48.437 14:57:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:48.437 14:57:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:48.437 14:57:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.437 14:57:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.437 14:57:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.437 14:57:21 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:48.437 14:57:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:48.437 14:57:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.696 14:57:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.696 14:57:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.696 14:57:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.696 14:57:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.696 14:57:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:48.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:48.696 00:16:48.696 --- 10.0.0.2 ping statistics --- 00:16:48.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.696 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:48.696 14:57:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:48.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:48.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:48.696 00:16:48.696 --- 10.0.0.3 ping statistics --- 00:16:48.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.696 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:48.696 14:57:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:48.696 00:16:48.696 --- 10.0.0.1 ping statistics --- 00:16:48.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.696 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:48.696 14:57:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.696 14:57:21 -- nvmf/common.sh@421 -- # return 0 00:16:48.696 14:57:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:48.696 14:57:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.696 14:57:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:48.696 14:57:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:48.696 14:57:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.696 14:57:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:48.696 14:57:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:48.696 14:57:21 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:48.696 14:57:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.696 14:57:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.696 14:57:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 14:57:21 -- nvmf/common.sh@469 -- # nvmfpid=88382 00:16:48.696 14:57:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:48.696 14:57:21 -- nvmf/common.sh@470 -- # waitforlisten 88382 00:16:48.696 14:57:21 -- common/autotest_common.sh@829 -- # '[' -z 88382 ']' 00:16:48.696 14:57:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.696 14:57:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
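Unlike the earlier runs, the TLS target is started with --wait-for-rpc, so the application pauses before subsystem initialization and the test can reconfigure the socket layer first — that is what the sock_set_default_impl / sock_impl_set_options calls that follow are doing. A rough sketch of that RPC sequence (framework_start_init is the usual "resume init" RPC but is not shown in this excerpt, so treat it as an assumption):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Make the ssl socket implementation the default and pin the TLS version,
    # exactly as the traces below do.
    $RPC sock_set_default_impl -i ssl
    $RPC sock_impl_set_options -i ssl --tls-version 13
    $RPC sock_impl_get_options -i ssl | jq -r .tls_version   # -> 13

    # Assumption: continue startup once the socket options are in place.
    $RPC framework_start_init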
00:16:48.696 14:57:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.696 14:57:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.696 14:57:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 [2024-12-01 14:57:21.695235] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.696 [2024-12-01 14:57:21.695314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.955 [2024-12-01 14:57:21.836640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.955 [2024-12-01 14:57:21.921327] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.955 [2024-12-01 14:57:21.921528] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.955 [2024-12-01 14:57:21.921546] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.955 [2024-12-01 14:57:21.921558] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.955 [2024-12-01 14:57:21.921602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.889 14:57:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.889 14:57:22 -- common/autotest_common.sh@862 -- # return 0 00:16:49.889 14:57:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.889 14:57:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.889 14:57:22 -- common/autotest_common.sh@10 -- # set +x 00:16:49.889 14:57:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.889 14:57:22 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:49.889 14:57:22 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:49.889 true 00:16:49.889 14:57:22 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:49.889 14:57:22 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.455 14:57:23 -- target/tls.sh@82 -- # version=0 00:16:50.455 14:57:23 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:50.455 14:57:23 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:50.455 14:57:23 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:50.455 14:57:23 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.713 14:57:23 -- target/tls.sh@90 -- # version=13 00:16:50.713 14:57:23 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:50.713 14:57:23 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:50.972 14:57:23 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.972 14:57:23 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:51.231 14:57:24 -- target/tls.sh@98 -- # version=7 00:16:51.231 14:57:24 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:51.231 14:57:24 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:51.231 14:57:24 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.490 14:57:24 -- 
target/tls.sh@105 -- # ktls=false 00:16:51.490 14:57:24 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:51.490 14:57:24 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:51.747 14:57:24 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:51.748 14:57:24 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:52.006 14:57:25 -- target/tls.sh@113 -- # ktls=true 00:16:52.006 14:57:25 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:52.006 14:57:25 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:52.264 14:57:25 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:52.264 14:57:25 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:52.522 14:57:25 -- target/tls.sh@121 -- # ktls=false 00:16:52.522 14:57:25 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:52.522 14:57:25 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:52.522 14:57:25 -- target/tls.sh@49 -- # local key hash crc 00:16:52.522 14:57:25 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:52.522 14:57:25 -- target/tls.sh@51 -- # hash=01 00:16:52.522 14:57:25 -- target/tls.sh@52 -- # gzip -1 -c 00:16:52.522 14:57:25 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:52.522 14:57:25 -- target/tls.sh@52 -- # tail -c8 00:16:52.522 14:57:25 -- target/tls.sh@52 -- # head -c 4 00:16:52.522 14:57:25 -- target/tls.sh@52 -- # crc='p$H�' 00:16:52.522 14:57:25 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:52.522 14:57:25 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:52.522 14:57:25 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.522 14:57:25 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.522 14:57:25 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:52.522 14:57:25 -- target/tls.sh@49 -- # local key hash crc 00:16:52.522 14:57:25 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:52.522 14:57:25 -- target/tls.sh@51 -- # hash=01 00:16:52.522 14:57:25 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:52.522 14:57:25 -- target/tls.sh@52 -- # gzip -1 -c 00:16:52.522 14:57:25 -- target/tls.sh@52 -- # head -c 4 00:16:52.522 14:57:25 -- target/tls.sh@52 -- # tail -c8 00:16:52.522 14:57:25 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:52.522 14:57:25 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:52.522 14:57:25 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:52.522 14:57:25 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.522 14:57:25 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.522 14:57:25 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.522 14:57:25 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:52.522 14:57:25 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.522 14:57:25 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
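format_interchange_psk, traced just above, converts a configured hex key into the NVMe TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64>:. The base64 payload is the key string followed by its CRC-32, and the script obtains that CRC from the gzip trailer (the last 8 bytes of a gzip stream are the CRC-32 and the input length, so tail -c8 | head -c4 yields the checksum). A standalone sketch of the same derivation, adequate for the sample keys shown here; the real helper in target/tls.sh is the authoritative version:

    format_interchange_psk() {
        local key=$1 hash=$2   # e.g. 00112233445566778899aabbccddeeff and 01
        # CRC-32 of the key string, pulled out of the gzip trailer
        local crc
        crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
        # interchange form: version prefix, hash designator, base64(key || crc32)
        echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 01
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The CRC bytes are binary, which is why the crc value echoed in the trace above contains an unprintable byte.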
00:16:52.522 14:57:25 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.522 14:57:25 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:52.522 14:57:25 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:52.781 14:57:25 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:53.039 14:57:26 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.039 14:57:26 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.039 14:57:26 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:53.298 [2024-12-01 14:57:26.331647] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.298 14:57:26 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:53.556 14:57:26 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:53.815 [2024-12-01 14:57:26.727687] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:53.815 [2024-12-01 14:57:26.727976] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.815 14:57:26 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:54.074 malloc0 00:16:54.074 14:57:27 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:54.333 14:57:27 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:54.592 14:57:27 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:04.568 Initializing NVMe Controllers 00:17:04.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:04.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:04.568 Initialization complete. Launching workers. 
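setup_nvmf_tgt, whose RPCs appear above, is the entire target-side TLS configuration: the ssl socket implementation is pinned to TLS 1.3 while the app is still paused by --wait-for-rpc, the TCP listener is created with -k so it only accepts TLS connections, and the PSK file is bound to one specific host NQN via nvmf_subsystem_add_host --psk. Each call below appears verbatim in the log; only the grouping into one sketch is new:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

    $rpc sock_impl_set_options -i ssl --tls-version 13   # set before framework_start_init
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"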
00:17:04.568 ======================================================== 00:17:04.568 Latency(us) 00:17:04.568 Device Information : IOPS MiB/s Average min max 00:17:04.569 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11932.80 46.61 5364.28 1439.04 14013.91 00:17:04.569 ======================================================== 00:17:04.569 Total : 11932.80 46.61 5364.28 1439.04 14013.91 00:17:04.569 00:17:04.828 14:57:37 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:04.828 14:57:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:04.828 14:57:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:04.828 14:57:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:04.828 14:57:37 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:04.828 14:57:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:04.828 14:57:37 -- target/tls.sh@28 -- # bdevperf_pid=88754 00:17:04.828 14:57:37 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:04.828 14:57:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.828 14:57:37 -- target/tls.sh@31 -- # waitforlisten 88754 /var/tmp/bdevperf.sock 00:17:04.828 14:57:37 -- common/autotest_common.sh@829 -- # '[' -z 88754 ']' 00:17:04.828 14:57:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.828 14:57:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.828 14:57:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.828 14:57:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.828 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:17:04.828 [2024-12-01 14:57:37.734876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:04.828 [2024-12-01 14:57:37.735327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88754 ] 00:17:04.828 [2024-12-01 14:57:37.884648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.086 [2024-12-01 14:57:37.973987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.653 14:57:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.653 14:57:38 -- common/autotest_common.sh@862 -- # return 0 00:17:05.653 14:57:38 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:05.912 [2024-12-01 14:57:38.950148] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.912 TLSTESTn1 00:17:06.171 14:57:39 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:06.171 Running I/O for 10 seconds... 
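run_bdevperf drives the initiator half of the test: bdevperf is started idle (-z) on its own RPC socket, a TLS-protected controller is attached with bdev_nvme_attach_controller --psk, and the I/O phase is then triggered through bdevperf.py perform_tests. The happy-path sequence from the log, condensed into one sketch:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # start bdevperf idle; it waits for bdevs to be attached over the RPC socket
    $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    # (the script waits here until $sock is listening)

    # attach the TLS-protected namespace as bdev TLSTEST using host1's PSK
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

    # run the verify workload for 10 seconds (20 s timeout on the control side)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

The connection only succeeds because host1 was registered on the target with the same key1.txt.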
00:17:16.151 00:17:16.151 Latency(us) 00:17:16.151 [2024-12-01T14:57:49.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.152 [2024-12-01T14:57:49.267Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:16.152 Verification LBA range: start 0x0 length 0x2000 00:17:16.152 TLSTESTn1 : 10.02 6301.53 24.62 0.00 0.00 20278.62 5957.82 20733.21 00:17:16.152 [2024-12-01T14:57:49.267Z] =================================================================================================================== 00:17:16.152 [2024-12-01T14:57:49.267Z] Total : 6301.53 24.62 0.00 0.00 20278.62 5957.82 20733.21 00:17:16.152 0 00:17:16.152 14:57:49 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.152 14:57:49 -- target/tls.sh@45 -- # killprocess 88754 00:17:16.152 14:57:49 -- common/autotest_common.sh@936 -- # '[' -z 88754 ']' 00:17:16.152 14:57:49 -- common/autotest_common.sh@940 -- # kill -0 88754 00:17:16.152 14:57:49 -- common/autotest_common.sh@941 -- # uname 00:17:16.152 14:57:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:16.152 14:57:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88754 00:17:16.152 killing process with pid 88754 00:17:16.152 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.152 00:17:16.152 Latency(us) 00:17:16.152 [2024-12-01T14:57:49.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.152 [2024-12-01T14:57:49.267Z] =================================================================================================================== 00:17:16.152 [2024-12-01T14:57:49.267Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.152 14:57:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:16.152 14:57:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:16.152 14:57:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88754' 00:17:16.152 14:57:49 -- common/autotest_common.sh@955 -- # kill 88754 00:17:16.152 14:57:49 -- common/autotest_common.sh@960 -- # wait 88754 00:17:16.410 14:57:49 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:16.410 14:57:49 -- common/autotest_common.sh@650 -- # local es=0 00:17:16.410 14:57:49 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:16.410 14:57:49 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:16.410 14:57:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.410 14:57:49 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:16.410 14:57:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.410 14:57:49 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:16.410 14:57:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.410 14:57:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.410 14:57:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.410 14:57:49 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:16.410 14:57:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.410 
14:57:49 -- target/tls.sh@28 -- # bdevperf_pid=88908 00:17:16.410 14:57:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.410 14:57:49 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.410 14:57:49 -- target/tls.sh@31 -- # waitforlisten 88908 /var/tmp/bdevperf.sock 00:17:16.410 14:57:49 -- common/autotest_common.sh@829 -- # '[' -z 88908 ']' 00:17:16.410 14:57:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.410 14:57:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.410 14:57:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.410 14:57:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.410 14:57:49 -- common/autotest_common.sh@10 -- # set +x 00:17:16.410 [2024-12-01 14:57:49.448064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:16.410 [2024-12-01 14:57:49.448177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88908 ] 00:17:16.669 [2024-12-01 14:57:49.586934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.669 [2024-12-01 14:57:49.632134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.603 14:57:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.603 14:57:50 -- common/autotest_common.sh@862 -- # return 0 00:17:17.603 14:57:50 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:17.603 [2024-12-01 14:57:50.655173] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.603 [2024-12-01 14:57:50.661982] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:17.603 [2024-12-01 14:57:50.662268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaccc0 (107): Transport endpoint is not connected 00:17:17.603 [2024-12-01 14:57:50.663248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaccc0 (9): Bad file descriptor 00:17:17.603 [2024-12-01 14:57:50.664242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:17.603 [2024-12-01 14:57:50.664259] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:17.603 [2024-12-01 14:57:50.664278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:17.603 2024/12/01 14:57:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:17.603 request: 00:17:17.603 { 00:17:17.603 "method": "bdev_nvme_attach_controller", 00:17:17.603 "params": { 00:17:17.603 "name": "TLSTEST", 00:17:17.603 "trtype": "tcp", 00:17:17.603 "traddr": "10.0.0.2", 00:17:17.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.603 "adrfam": "ipv4", 00:17:17.603 "trsvcid": "4420", 00:17:17.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.603 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:17.603 } 00:17:17.603 } 00:17:17.603 Got JSON-RPC error response 00:17:17.603 GoRPCClient: error on JSON-RPC call 00:17:17.603 14:57:50 -- target/tls.sh@36 -- # killprocess 88908 00:17:17.603 14:57:50 -- common/autotest_common.sh@936 -- # '[' -z 88908 ']' 00:17:17.603 14:57:50 -- common/autotest_common.sh@940 -- # kill -0 88908 00:17:17.603 14:57:50 -- common/autotest_common.sh@941 -- # uname 00:17:17.603 14:57:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.603 14:57:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88908 00:17:17.862 killing process with pid 88908 00:17:17.862 Received shutdown signal, test time was about 10.000000 seconds 00:17:17.862 00:17:17.862 Latency(us) 00:17:17.862 [2024-12-01T14:57:50.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.862 [2024-12-01T14:57:50.977Z] =================================================================================================================== 00:17:17.862 [2024-12-01T14:57:50.977Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:17.862 14:57:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:17.862 14:57:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:17.862 14:57:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88908' 00:17:17.862 14:57:50 -- common/autotest_common.sh@955 -- # kill 88908 00:17:17.862 14:57:50 -- common/autotest_common.sh@960 -- # wait 88908 00:17:17.862 14:57:50 -- target/tls.sh@37 -- # return 1 00:17:17.862 14:57:50 -- common/autotest_common.sh@653 -- # es=1 00:17:17.862 14:57:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:17.862 14:57:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:17.862 14:57:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:17.862 14:57:50 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:17.862 14:57:50 -- common/autotest_common.sh@650 -- # local es=0 00:17:17.862 14:57:50 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:17.862 14:57:50 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:17.862 14:57:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:17.862 14:57:50 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:17.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
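The failure above is the first negative case: host1 is registered on the target with key1.txt, so offering a PSK derived from key2.txt means the TLS handshake cannot complete, the socket is torn down (hence the errno 107 "Transport endpoint is not connected" messages), and bdev_nvme_attach_controller reports the generic -32602 JSON-RPC error. The test counts this as a pass because the NOT wrapper inverts the exit status; the shape of that check, condensed from the autotest_common.sh plumbing, is roughly:

    # expected-failure pattern: attaching with the wrong key for host1 must not succeed
    if run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
           /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt; then
        echo "attach with a mismatched PSK unexpectedly succeeded" >&2
        exit 1
    fi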
00:17:17.862 14:57:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:17.862 14:57:50 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:17.862 14:57:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:17.862 14:57:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:17.862 14:57:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:17.862 14:57:50 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:17.862 14:57:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.862 14:57:50 -- target/tls.sh@28 -- # bdevperf_pid=88949 00:17:17.862 14:57:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.862 14:57:50 -- target/tls.sh@31 -- # waitforlisten 88949 /var/tmp/bdevperf.sock 00:17:17.862 14:57:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.862 14:57:50 -- common/autotest_common.sh@829 -- # '[' -z 88949 ']' 00:17:17.862 14:57:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.862 14:57:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.862 14:57:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.862 14:57:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.862 14:57:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.862 [2024-12-01 14:57:50.936015] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:17.862 [2024-12-01 14:57:50.936349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88949 ] 00:17:18.130 [2024-12-01 14:57:51.065720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.130 [2024-12-01 14:57:51.116266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.112 14:57:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.112 14:57:51 -- common/autotest_common.sh@862 -- # return 0 00:17:19.112 14:57:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:19.112 [2024-12-01 14:57:52.131599] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.112 [2024-12-01 14:57:52.141953] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:19.112 [2024-12-01 14:57:52.142014] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:19.112 [2024-12-01 14:57:52.142087] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:19.112 [2024-12-01 14:57:52.142742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1243cc0 (107): Transport endpoint is not connected 00:17:19.112 [2024-12-01 14:57:52.143730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1243cc0 (9): Bad file descriptor 00:17:19.112 [2024-12-01 14:57:52.144727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:19.112 [2024-12-01 14:57:52.144746] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:19.112 [2024-12-01 14:57:52.144762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:19.112 2024/12/01 14:57:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:19.112 request: 00:17:19.112 { 00:17:19.112 "method": "bdev_nvme_attach_controller", 00:17:19.112 "params": { 00:17:19.112 "name": "TLSTEST", 00:17:19.112 "trtype": "tcp", 00:17:19.112 "traddr": "10.0.0.2", 00:17:19.112 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:19.112 "adrfam": "ipv4", 00:17:19.112 "trsvcid": "4420", 00:17:19.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.112 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:19.112 } 00:17:19.112 } 00:17:19.112 Got JSON-RPC error response 00:17:19.112 GoRPCClient: error on JSON-RPC call 00:17:19.112 14:57:52 -- target/tls.sh@36 -- # killprocess 88949 00:17:19.112 14:57:52 -- common/autotest_common.sh@936 -- # '[' -z 88949 ']' 00:17:19.112 14:57:52 -- common/autotest_common.sh@940 -- # kill -0 88949 00:17:19.112 14:57:52 -- common/autotest_common.sh@941 -- # uname 00:17:19.112 14:57:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.112 14:57:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88949 00:17:19.112 killing process with pid 88949 00:17:19.112 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.112 00:17:19.112 Latency(us) 00:17:19.112 [2024-12-01T14:57:52.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.112 [2024-12-01T14:57:52.227Z] =================================================================================================================== 00:17:19.112 [2024-12-01T14:57:52.227Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.112 14:57:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:19.112 14:57:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:19.112 14:57:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88949' 00:17:19.112 14:57:52 -- common/autotest_common.sh@955 -- # kill 88949 00:17:19.112 14:57:52 -- common/autotest_common.sh@960 -- # wait 88949 00:17:19.371 14:57:52 -- target/tls.sh@37 -- # return 1 00:17:19.371 14:57:52 -- common/autotest_common.sh@653 -- # es=1 00:17:19.371 14:57:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:19.371 14:57:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:19.371 14:57:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:19.371 14:57:52 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:19.371 14:57:52 -- common/autotest_common.sh@650 -- # local es=0 00:17:19.371 14:57:52 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:19.371 14:57:52 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:19.371 14:57:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.371 14:57:52 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:19.371 14:57:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.371 14:57:52 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:19.371 14:57:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.371 14:57:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:19.371 14:57:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:19.371 14:57:52 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:19.371 14:57:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.371 14:57:52 -- target/tls.sh@28 -- # bdevperf_pid=88989 00:17:19.371 14:57:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.371 14:57:52 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.371 14:57:52 -- target/tls.sh@31 -- # waitforlisten 88989 /var/tmp/bdevperf.sock 00:17:19.371 14:57:52 -- common/autotest_common.sh@829 -- # '[' -z 88989 ']' 00:17:19.371 14:57:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.371 14:57:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.371 14:57:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.371 14:57:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.371 14:57:52 -- common/autotest_common.sh@10 -- # set +x 00:17:19.371 [2024-12-01 14:57:52.433089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:19.371 [2024-12-01 14:57:52.433378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88989 ] 00:17:19.630 [2024-12-01 14:57:52.573963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.630 [2024-12-01 14:57:52.620768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.565 14:57:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.565 14:57:53 -- common/autotest_common.sh@862 -- # return 0 00:17:20.565 14:57:53 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.565 [2024-12-01 14:57:53.596016] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.565 [2024-12-01 14:57:53.602216] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:20.565 [2024-12-01 14:57:53.602275] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:20.565 [2024-12-01 14:57:53.602360] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:20.565 [2024-12-01 14:57:53.603158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
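This attach to cnode2, like the host2 attempt before it, is expected to fail one step earlier than the key2.txt mismatch: the target resolves the PSK by the TLS identity it receives (NVMe0R01 <hostnqn> <subnqn>), and neither host2-on-cnode1 nor host1-on-cnode2 was ever registered, so tcp_sock_get_key reports "Could not find PSK for identity" before any key material is compared. If one actually wanted the host2 attach to work, the missing piece would be a registration along these lines (hypothetical here, not part of the test):

    # hypothetical: give host2 its own PSK on cnode1 so the identity lookup can succeed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt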
tqpair=0x1167cc0 (107): Transport endpoint is not connected 00:17:20.565 [2024-12-01 14:57:53.604147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1167cc0 (9): Bad file descriptor 00:17:20.565 [2024-12-01 14:57:53.605154] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:20.565 [2024-12-01 14:57:53.605184] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:20.565 [2024-12-01 14:57:53.605193] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:20.565 2024/12/01 14:57:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:20.565 request: 00:17:20.565 { 00:17:20.565 "method": "bdev_nvme_attach_controller", 00:17:20.565 "params": { 00:17:20.565 "name": "TLSTEST", 00:17:20.565 "trtype": "tcp", 00:17:20.565 "traddr": "10.0.0.2", 00:17:20.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.565 "adrfam": "ipv4", 00:17:20.565 "trsvcid": "4420", 00:17:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:20.565 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:20.565 } 00:17:20.565 } 00:17:20.565 Got JSON-RPC error response 00:17:20.565 GoRPCClient: error on JSON-RPC call 00:17:20.566 14:57:53 -- target/tls.sh@36 -- # killprocess 88989 00:17:20.566 14:57:53 -- common/autotest_common.sh@936 -- # '[' -z 88989 ']' 00:17:20.566 14:57:53 -- common/autotest_common.sh@940 -- # kill -0 88989 00:17:20.566 14:57:53 -- common/autotest_common.sh@941 -- # uname 00:17:20.566 14:57:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.566 14:57:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88989 00:17:20.566 killing process with pid 88989 00:17:20.566 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.566 00:17:20.566 Latency(us) 00:17:20.566 [2024-12-01T14:57:53.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.566 [2024-12-01T14:57:53.681Z] =================================================================================================================== 00:17:20.566 [2024-12-01T14:57:53.681Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.566 14:57:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:20.566 14:57:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:20.566 14:57:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88989' 00:17:20.566 14:57:53 -- common/autotest_common.sh@955 -- # kill 88989 00:17:20.566 14:57:53 -- common/autotest_common.sh@960 -- # wait 88989 00:17:20.823 14:57:53 -- target/tls.sh@37 -- # return 1 00:17:20.823 14:57:53 -- common/autotest_common.sh@653 -- # es=1 00:17:20.823 14:57:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.823 14:57:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.823 14:57:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.823 14:57:53 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.823 14:57:53 -- common/autotest_common.sh@650 -- # local es=0 00:17:20.823 14:57:53 -- 
common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.823 14:57:53 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:20.823 14:57:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.823 14:57:53 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:20.823 14:57:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.823 14:57:53 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.823 14:57:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.823 14:57:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.823 14:57:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:20.823 14:57:53 -- target/tls.sh@23 -- # psk= 00:17:20.823 14:57:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.823 14:57:53 -- target/tls.sh@28 -- # bdevperf_pid=89040 00:17:20.823 14:57:53 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.823 14:57:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.823 14:57:53 -- target/tls.sh@31 -- # waitforlisten 89040 /var/tmp/bdevperf.sock 00:17:20.823 14:57:53 -- common/autotest_common.sh@829 -- # '[' -z 89040 ']' 00:17:20.823 14:57:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.823 14:57:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.823 14:57:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.823 14:57:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.823 14:57:53 -- common/autotest_common.sh@10 -- # set +x 00:17:20.823 [2024-12-01 14:57:53.870546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:20.823 [2024-12-01 14:57:53.870894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89040 ] 00:17:21.080 [2024-12-01 14:57:54.004188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.080 [2024-12-01 14:57:54.054964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.012 14:57:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.012 14:57:54 -- common/autotest_common.sh@862 -- # return 0 00:17:22.012 14:57:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:22.012 [2024-12-01 14:57:55.032196] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:22.012 [2024-12-01 14:57:55.034065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a978c0 (9): Bad file descriptor 00:17:22.012 [2024-12-01 14:57:55.035057] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:22.012 [2024-12-01 14:57:55.035080] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:22.012 [2024-12-01 14:57:55.035094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:22.012 2024/12/01 14:57:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:22.012 request: 00:17:22.012 { 00:17:22.012 "method": "bdev_nvme_attach_controller", 00:17:22.012 "params": { 00:17:22.012 "name": "TLSTEST", 00:17:22.012 "trtype": "tcp", 00:17:22.012 "traddr": "10.0.0.2", 00:17:22.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.012 "adrfam": "ipv4", 00:17:22.012 "trsvcid": "4420", 00:17:22.012 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:22.012 } 00:17:22.012 } 00:17:22.012 Got JSON-RPC error response 00:17:22.012 GoRPCClient: error on JSON-RPC call 00:17:22.012 14:57:55 -- target/tls.sh@36 -- # killprocess 89040 00:17:22.012 14:57:55 -- common/autotest_common.sh@936 -- # '[' -z 89040 ']' 00:17:22.012 14:57:55 -- common/autotest_common.sh@940 -- # kill -0 89040 00:17:22.012 14:57:55 -- common/autotest_common.sh@941 -- # uname 00:17:22.012 14:57:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.012 14:57:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89040 00:17:22.012 killing process with pid 89040 00:17:22.012 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.012 00:17:22.012 Latency(us) 00:17:22.012 [2024-12-01T14:57:55.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.012 [2024-12-01T14:57:55.127Z] =================================================================================================================== 00:17:22.012 [2024-12-01T14:57:55.127Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.012 14:57:55 -- common/autotest_common.sh@942 -- # 
process_name=reactor_2 00:17:22.012 14:57:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:22.012 14:57:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89040' 00:17:22.012 14:57:55 -- common/autotest_common.sh@955 -- # kill 89040 00:17:22.012 14:57:55 -- common/autotest_common.sh@960 -- # wait 89040 00:17:22.270 14:57:55 -- target/tls.sh@37 -- # return 1 00:17:22.270 14:57:55 -- common/autotest_common.sh@653 -- # es=1 00:17:22.270 14:57:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.270 14:57:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.270 14:57:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.270 14:57:55 -- target/tls.sh@167 -- # killprocess 88382 00:17:22.270 14:57:55 -- common/autotest_common.sh@936 -- # '[' -z 88382 ']' 00:17:22.270 14:57:55 -- common/autotest_common.sh@940 -- # kill -0 88382 00:17:22.270 14:57:55 -- common/autotest_common.sh@941 -- # uname 00:17:22.270 14:57:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.270 14:57:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88382 00:17:22.270 killing process with pid 88382 00:17:22.270 14:57:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:22.270 14:57:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:22.270 14:57:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88382' 00:17:22.270 14:57:55 -- common/autotest_common.sh@955 -- # kill 88382 00:17:22.270 14:57:55 -- common/autotest_common.sh@960 -- # wait 88382 00:17:22.526 14:57:55 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:22.526 14:57:55 -- target/tls.sh@49 -- # local key hash crc 00:17:22.526 14:57:55 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:22.526 14:57:55 -- target/tls.sh@51 -- # hash=02 00:17:22.526 14:57:55 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:22.526 14:57:55 -- target/tls.sh@52 -- # gzip -1 -c 00:17:22.526 14:57:55 -- target/tls.sh@52 -- # tail -c8 00:17:22.526 14:57:55 -- target/tls.sh@52 -- # head -c 4 00:17:22.526 14:57:55 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:22.526 14:57:55 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:22.526 14:57:55 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:22.526 14:57:55 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.526 14:57:55 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.526 14:57:55 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.526 14:57:55 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.526 14:57:55 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.526 14:57:55 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:22.526 14:57:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.527 14:57:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.527 14:57:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.527 14:57:55 -- nvmf/common.sh@469 -- # nvmfpid=89101 00:17:22.527 14:57:55 -- nvmf/common.sh@470 -- # waitforlisten 89101 
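For the second half of the test the same helper is used with a longer key (48 hex digits rather than 32) and hash designator 02 instead of 01, the result is written to key_long.txt, and the file is restricted to 0600 before the target is restarted; that permission becomes relevant in the final case below. Using the helper sketched earlier, the equivalent steps are:

    key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02)
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
    echo -n "$key_long" > /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt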
00:17:22.527 14:57:55 -- common/autotest_common.sh@829 -- # '[' -z 89101 ']' 00:17:22.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.527 14:57:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.527 14:57:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.527 14:57:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.527 14:57:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.527 14:57:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.527 14:57:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.784 [2024-12-01 14:57:55.652409] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:22.784 [2024-12-01 14:57:55.652480] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.784 [2024-12-01 14:57:55.782980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.784 [2024-12-01 14:57:55.863056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.784 [2024-12-01 14:57:55.863196] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.784 [2024-12-01 14:57:55.863208] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.784 [2024-12-01 14:57:55.863215] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:22.784 [2024-12-01 14:57:55.863242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.715 14:57:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.715 14:57:56 -- common/autotest_common.sh@862 -- # return 0 00:17:23.715 14:57:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:23.715 14:57:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:23.715 14:57:56 -- common/autotest_common.sh@10 -- # set +x 00:17:23.715 14:57:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.715 14:57:56 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.715 14:57:56 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.715 14:57:56 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:23.974 [2024-12-01 14:57:56.895264] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.974 14:57:56 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:24.232 14:57:57 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:24.490 [2024-12-01 14:57:57.359395] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:24.490 [2024-12-01 14:57:57.359699] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.491 14:57:57 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:24.748 malloc0 00:17:24.748 14:57:57 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:25.006 14:57:57 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:25.265 14:57:58 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:25.265 14:57:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:25.265 14:57:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:25.265 14:57:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:25.265 14:57:58 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:25.265 14:57:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:25.265 14:57:58 -- target/tls.sh@28 -- # bdevperf_pid=89198 00:17:25.265 14:57:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:25.265 14:57:58 -- target/tls.sh@31 -- # waitforlisten 89198 /var/tmp/bdevperf.sock 00:17:25.265 14:57:58 -- common/autotest_common.sh@829 -- # '[' -z 89198 ']' 00:17:25.265 14:57:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.265 14:57:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:25.265 14:57:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.265 14:57:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.265 14:57:58 -- common/autotest_common.sh@10 -- # set +x 00:17:25.265 14:57:58 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:25.265 [2024-12-01 14:57:58.172544] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:25.265 [2024-12-01 14:57:58.172625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89198 ] 00:17:25.265 [2024-12-01 14:57:58.308914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.265 [2024-12-01 14:57:58.375066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.202 14:57:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.202 14:57:59 -- common/autotest_common.sh@862 -- # return 0 00:17:26.202 14:57:59 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.460 [2024-12-01 14:57:59.332105] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:26.460 TLSTESTn1 00:17:26.460 14:57:59 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:26.460 Running I/O for 10 seconds... 
00:17:36.442 00:17:36.442 Latency(us) 00:17:36.442 [2024-12-01T14:58:09.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.442 [2024-12-01T14:58:09.557Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:36.442 Verification LBA range: start 0x0 length 0x2000 00:17:36.442 TLSTESTn1 : 10.01 6328.35 24.72 0.00 0.00 20195.33 5093.93 21448.15 00:17:36.442 [2024-12-01T14:58:09.557Z] =================================================================================================================== 00:17:36.442 [2024-12-01T14:58:09.557Z] Total : 6328.35 24.72 0.00 0.00 20195.33 5093.93 21448.15 00:17:36.442 0 00:17:36.442 14:58:09 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.442 14:58:09 -- target/tls.sh@45 -- # killprocess 89198 00:17:36.442 14:58:09 -- common/autotest_common.sh@936 -- # '[' -z 89198 ']' 00:17:36.442 14:58:09 -- common/autotest_common.sh@940 -- # kill -0 89198 00:17:36.442 14:58:09 -- common/autotest_common.sh@941 -- # uname 00:17:36.442 14:58:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.442 14:58:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89198 00:17:36.701 killing process with pid 89198 00:17:36.701 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.701 00:17:36.701 Latency(us) 00:17:36.701 [2024-12-01T14:58:09.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.701 [2024-12-01T14:58:09.816Z] =================================================================================================================== 00:17:36.702 [2024-12-01T14:58:09.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.702 14:58:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:36.702 14:58:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:36.702 14:58:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89198' 00:17:36.702 14:58:09 -- common/autotest_common.sh@955 -- # kill 89198 00:17:36.702 14:58:09 -- common/autotest_common.sh@960 -- # wait 89198 00:17:36.702 14:58:09 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.702 14:58:09 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.702 14:58:09 -- common/autotest_common.sh@650 -- # local es=0 00:17:36.702 14:58:09 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.702 14:58:09 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:36.702 14:58:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.702 14:58:09 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:36.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
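The closing case deliberately loosens the key file to 0666 (the chmod above) and retries the attach: the bdev_nvme PSK loader (tcp_load_psk) refuses to read a key file whose mode allows access beyond the owner, so this run is expected to fail with "Could not retrieve PSK from file" rather than with a handshake error, as the next lines show. A quick way to express the property being tested, assuming GNU stat:

    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    # PSK files must be private to the owner; a 0666 key is rejected at attach time
    [ "$(stat -c '%a' "$key")" = "600" ] || chmod 0600 "$key"

The earlier, passing runs worked only because the keys were chmod 0600 right after they were generated.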
00:17:36.702 14:58:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.702 14:58:09 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.702 14:58:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:36.702 14:58:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:36.702 14:58:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:36.702 14:58:09 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:36.702 14:58:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.702 14:58:09 -- target/tls.sh@28 -- # bdevperf_pid=89350 00:17:36.702 14:58:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.702 14:58:09 -- target/tls.sh@31 -- # waitforlisten 89350 /var/tmp/bdevperf.sock 00:17:36.702 14:58:09 -- common/autotest_common.sh@829 -- # '[' -z 89350 ']' 00:17:36.702 14:58:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.702 14:58:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.702 14:58:09 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.702 14:58:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.702 14:58:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.702 14:58:09 -- common/autotest_common.sh@10 -- # set +x 00:17:36.960 [2024-12-01 14:58:09.830921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:36.960 [2024-12-01 14:58:09.831260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89350 ] 00:17:36.960 [2024-12-01 14:58:09.968150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.960 [2024-12-01 14:58:10.019383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.895 14:58:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.895 14:58:10 -- common/autotest_common.sh@862 -- # return 0 00:17:37.895 14:58:10 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.154 [2024-12-01 14:58:11.099841] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.154 [2024-12-01 14:58:11.099887] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:38.154 2024/12/01 14:58:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.154 request: 00:17:38.154 { 00:17:38.154 "method": "bdev_nvme_attach_controller", 00:17:38.154 "params": { 00:17:38.154 "name": "TLSTEST", 00:17:38.154 "trtype": "tcp", 00:17:38.154 "traddr": "10.0.0.2", 00:17:38.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.154 "adrfam": "ipv4", 00:17:38.154 "trsvcid": "4420", 00:17:38.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.154 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:38.154 } 00:17:38.154 } 00:17:38.154 Got JSON-RPC error response 00:17:38.154 GoRPCClient: error on JSON-RPC call 00:17:38.154 14:58:11 -- target/tls.sh@36 -- # killprocess 89350 00:17:38.154 14:58:11 -- common/autotest_common.sh@936 -- # '[' -z 89350 ']' 00:17:38.154 14:58:11 -- common/autotest_common.sh@940 -- # kill -0 89350 00:17:38.154 14:58:11 -- common/autotest_common.sh@941 -- # uname 00:17:38.154 14:58:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.154 14:58:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89350 00:17:38.154 14:58:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:38.154 14:58:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:38.154 killing process with pid 89350 00:17:38.154 14:58:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89350' 00:17:38.154 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.154 00:17:38.154 Latency(us) 00:17:38.154 [2024-12-01T14:58:11.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.154 [2024-12-01T14:58:11.269Z] =================================================================================================================== 00:17:38.154 [2024-12-01T14:58:11.269Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.154 14:58:11 -- 
common/autotest_common.sh@955 -- # kill 89350 00:17:38.154 14:58:11 -- common/autotest_common.sh@960 -- # wait 89350 00:17:38.414 14:58:11 -- target/tls.sh@37 -- # return 1 00:17:38.414 14:58:11 -- common/autotest_common.sh@653 -- # es=1 00:17:38.414 14:58:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.414 14:58:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.414 14:58:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.414 14:58:11 -- target/tls.sh@183 -- # killprocess 89101 00:17:38.414 14:58:11 -- common/autotest_common.sh@936 -- # '[' -z 89101 ']' 00:17:38.414 14:58:11 -- common/autotest_common.sh@940 -- # kill -0 89101 00:17:38.414 14:58:11 -- common/autotest_common.sh@941 -- # uname 00:17:38.414 14:58:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.414 14:58:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89101 00:17:38.414 14:58:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:38.414 14:58:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:38.414 killing process with pid 89101 00:17:38.414 14:58:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89101' 00:17:38.414 14:58:11 -- common/autotest_common.sh@955 -- # kill 89101 00:17:38.414 14:58:11 -- common/autotest_common.sh@960 -- # wait 89101 00:17:38.674 14:58:11 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:38.674 14:58:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:38.674 14:58:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:38.674 14:58:11 -- common/autotest_common.sh@10 -- # set +x 00:17:38.674 14:58:11 -- nvmf/common.sh@469 -- # nvmfpid=89402 00:17:38.674 14:58:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:38.674 14:58:11 -- nvmf/common.sh@470 -- # waitforlisten 89402 00:17:38.674 14:58:11 -- common/autotest_common.sh@829 -- # '[' -z 89402 ']' 00:17:38.674 14:58:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.674 14:58:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.674 14:58:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.674 14:58:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.674 14:58:11 -- common/autotest_common.sh@10 -- # set +x 00:17:38.674 [2024-12-01 14:58:11.680225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:38.674 [2024-12-01 14:58:11.680297] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.933 [2024-12-01 14:58:11.808369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.933 [2024-12-01 14:58:11.897335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:38.933 [2024-12-01 14:58:11.897499] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.933 [2024-12-01 14:58:11.897512] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:38.933 [2024-12-01 14:58:11.897519] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.933 [2024-12-01 14:58:11.897546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.871 14:58:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.871 14:58:12 -- common/autotest_common.sh@862 -- # return 0 00:17:39.871 14:58:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:39.871 14:58:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.871 14:58:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.871 14:58:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.871 14:58:12 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.871 14:58:12 -- common/autotest_common.sh@650 -- # local es=0 00:17:39.872 14:58:12 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.872 14:58:12 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:39.872 14:58:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.872 14:58:12 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:39.872 14:58:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.872 14:58:12 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.872 14:58:12 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.872 14:58:12 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:39.872 [2024-12-01 14:58:12.940837] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.872 14:58:12 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:40.130 14:58:13 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:40.390 [2024-12-01 14:58:13.356904] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.390 [2024-12-01 14:58:13.357227] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.390 14:58:13 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:40.649 malloc0 00:17:40.649 14:58:13 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:40.908 14:58:13 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.168 [2024-12-01 14:58:14.096305] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:41.168 [2024-12-01 14:58:14.096341] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:41.168 [2024-12-01 14:58:14.096373] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:41.168 2024/12/01 14:58:14 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:41.168 request: 00:17:41.168 { 00:17:41.168 "method": "nvmf_subsystem_add_host", 00:17:41.168 "params": { 00:17:41.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.168 "host": "nqn.2016-06.io.spdk:host1", 00:17:41.168 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:41.168 } 00:17:41.168 } 00:17:41.168 Got JSON-RPC error response 00:17:41.168 GoRPCClient: error on JSON-RPC call 00:17:41.168 14:58:14 -- common/autotest_common.sh@653 -- # es=1 00:17:41.168 14:58:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.168 14:58:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.168 14:58:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.168 14:58:14 -- target/tls.sh@189 -- # killprocess 89402 00:17:41.168 14:58:14 -- common/autotest_common.sh@936 -- # '[' -z 89402 ']' 00:17:41.168 14:58:14 -- common/autotest_common.sh@940 -- # kill -0 89402 00:17:41.168 14:58:14 -- common/autotest_common.sh@941 -- # uname 00:17:41.168 14:58:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.168 14:58:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89402 00:17:41.168 14:58:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:41.168 14:58:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:41.168 killing process with pid 89402 00:17:41.168 14:58:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89402' 00:17:41.168 14:58:14 -- common/autotest_common.sh@955 -- # kill 89402 00:17:41.168 14:58:14 -- common/autotest_common.sh@960 -- # wait 89402 00:17:41.428 14:58:14 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.428 14:58:14 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:41.428 14:58:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:41.428 14:58:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.428 14:58:14 -- common/autotest_common.sh@10 -- # set +x 00:17:41.428 14:58:14 -- nvmf/common.sh@469 -- # nvmfpid=89517 00:17:41.428 14:58:14 -- nvmf/common.sh@470 -- # waitforlisten 89517 00:17:41.428 14:58:14 -- common/autotest_common.sh@829 -- # '[' -z 89517 ']' 00:17:41.428 14:58:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.428 14:58:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.428 14:58:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.428 14:58:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.428 14:58:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.428 14:58:14 -- common/autotest_common.sh@10 -- # set +x 00:17:41.428 [2024-12-01 14:58:14.476623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:41.428 [2024-12-01 14:58:14.476708] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.686 [2024-12-01 14:58:14.609795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.686 [2024-12-01 14:58:14.673913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:41.686 [2024-12-01 14:58:14.674054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.686 [2024-12-01 14:58:14.674068] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.686 [2024-12-01 14:58:14.674076] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.686 [2024-12-01 14:58:14.674102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.620 14:58:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.620 14:58:15 -- common/autotest_common.sh@862 -- # return 0 00:17:42.620 14:58:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:42.620 14:58:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:42.620 14:58:15 -- common/autotest_common.sh@10 -- # set +x 00:17:42.620 14:58:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.620 14:58:15 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.620 14:58:15 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.620 14:58:15 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:42.879 [2024-12-01 14:58:15.778936] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.879 14:58:15 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:42.879 14:58:15 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:43.138 [2024-12-01 14:58:16.243008] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.138 [2024-12-01 14:58:16.243312] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.397 14:58:16 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:43.397 malloc0 00:17:43.397 14:58:16 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:43.655 14:58:16 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:43.914 14:58:16 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.914 14:58:16 -- target/tls.sh@197 -- # bdevperf_pid=89615 00:17:43.914 14:58:16 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.914 14:58:16 -- target/tls.sh@200 -- # waitforlisten 89615 /var/tmp/bdevperf.sock 00:17:43.914 
14:58:16 -- common/autotest_common.sh@829 -- # '[' -z 89615 ']' 00:17:43.914 14:58:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.914 14:58:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.914 14:58:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.914 14:58:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.914 14:58:16 -- common/autotest_common.sh@10 -- # set +x 00:17:43.914 [2024-12-01 14:58:16.911380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:43.914 [2024-12-01 14:58:16.911450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89615 ] 00:17:44.173 [2024-12-01 14:58:17.047647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.173 [2024-12-01 14:58:17.113988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.108 14:58:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.108 14:58:17 -- common/autotest_common.sh@862 -- # return 0 00:17:45.108 14:58:17 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:45.108 [2024-12-01 14:58:18.035727] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.108 TLSTESTn1 00:17:45.108 14:58:18 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:45.675 14:58:18 -- target/tls.sh@205 -- # tgtconf='{ 00:17:45.675 "subsystems": [ 00:17:45.675 { 00:17:45.675 "subsystem": "iobuf", 00:17:45.675 "config": [ 00:17:45.675 { 00:17:45.675 "method": "iobuf_set_options", 00:17:45.675 "params": { 00:17:45.675 "large_bufsize": 135168, 00:17:45.675 "large_pool_count": 1024, 00:17:45.675 "small_bufsize": 8192, 00:17:45.675 "small_pool_count": 8192 00:17:45.675 } 00:17:45.675 } 00:17:45.675 ] 00:17:45.675 }, 00:17:45.675 { 00:17:45.675 "subsystem": "sock", 00:17:45.675 "config": [ 00:17:45.675 { 00:17:45.675 "method": "sock_impl_set_options", 00:17:45.675 "params": { 00:17:45.675 "enable_ktls": false, 00:17:45.675 "enable_placement_id": 0, 00:17:45.675 "enable_quickack": false, 00:17:45.675 "enable_recv_pipe": true, 00:17:45.675 "enable_zerocopy_send_client": false, 00:17:45.675 "enable_zerocopy_send_server": true, 00:17:45.675 "impl_name": "posix", 00:17:45.675 "recv_buf_size": 2097152, 00:17:45.675 "send_buf_size": 2097152, 00:17:45.675 "tls_version": 0, 00:17:45.675 "zerocopy_threshold": 0 00:17:45.675 } 00:17:45.675 }, 00:17:45.675 { 00:17:45.675 "method": "sock_impl_set_options", 00:17:45.675 "params": { 00:17:45.675 "enable_ktls": false, 00:17:45.675 "enable_placement_id": 0, 00:17:45.675 "enable_quickack": false, 00:17:45.675 "enable_recv_pipe": true, 00:17:45.675 "enable_zerocopy_send_client": false, 00:17:45.675 "enable_zerocopy_send_server": true, 00:17:45.675 "impl_name": "ssl", 00:17:45.675 "recv_buf_size": 4096, 00:17:45.675 "send_buf_size": 4096, 00:17:45.675 
"tls_version": 0, 00:17:45.675 "zerocopy_threshold": 0 00:17:45.675 } 00:17:45.675 } 00:17:45.675 ] 00:17:45.675 }, 00:17:45.675 { 00:17:45.675 "subsystem": "vmd", 00:17:45.675 "config": [] 00:17:45.675 }, 00:17:45.675 { 00:17:45.675 "subsystem": "accel", 00:17:45.675 "config": [ 00:17:45.675 { 00:17:45.675 "method": "accel_set_options", 00:17:45.675 "params": { 00:17:45.675 "buf_count": 2048, 00:17:45.675 "large_cache_size": 16, 00:17:45.675 "sequence_count": 2048, 00:17:45.675 "small_cache_size": 128, 00:17:45.675 "task_count": 2048 00:17:45.675 } 00:17:45.675 } 00:17:45.675 ] 00:17:45.675 }, 00:17:45.675 { 00:17:45.675 "subsystem": "bdev", 00:17:45.675 "config": [ 00:17:45.675 { 00:17:45.675 "method": "bdev_set_options", 00:17:45.675 "params": { 00:17:45.675 "bdev_auto_examine": true, 00:17:45.675 "bdev_io_cache_size": 256, 00:17:45.675 "bdev_io_pool_size": 65535, 00:17:45.675 "iobuf_large_cache_size": 16, 00:17:45.675 "iobuf_small_cache_size": 128 00:17:45.675 } 00:17:45.675 }, 00:17:45.675 { 00:17:45.675 "method": "bdev_raid_set_options", 00:17:45.675 "params": { 00:17:45.675 "process_window_size_kb": 1024 00:17:45.675 } 00:17:45.675 }, 00:17:45.675 { 00:17:45.675 "method": "bdev_iscsi_set_options", 00:17:45.675 "params": { 00:17:45.675 "timeout_sec": 30 00:17:45.675 } 00:17:45.675 }, 00:17:45.675 { 00:17:45.675 "method": "bdev_nvme_set_options", 00:17:45.675 "params": { 00:17:45.675 "action_on_timeout": "none", 00:17:45.675 "allow_accel_sequence": false, 00:17:45.675 "arbitration_burst": 0, 00:17:45.675 "bdev_retry_count": 3, 00:17:45.675 "ctrlr_loss_timeout_sec": 0, 00:17:45.675 "delay_cmd_submit": true, 00:17:45.675 "fast_io_fail_timeout_sec": 0, 00:17:45.675 "generate_uuids": false, 00:17:45.675 "high_priority_weight": 0, 00:17:45.675 "io_path_stat": false, 00:17:45.675 "io_queue_requests": 0, 00:17:45.675 "keep_alive_timeout_ms": 10000, 00:17:45.675 "low_priority_weight": 0, 00:17:45.675 "medium_priority_weight": 0, 00:17:45.675 "nvme_adminq_poll_period_us": 10000, 00:17:45.675 "nvme_ioq_poll_period_us": 0, 00:17:45.675 "reconnect_delay_sec": 0, 00:17:45.675 "timeout_admin_us": 0, 00:17:45.675 "timeout_us": 0, 00:17:45.675 "transport_ack_timeout": 0, 00:17:45.675 "transport_retry_count": 4, 00:17:45.675 "transport_tos": 0 00:17:45.675 } 00:17:45.675 }, 00:17:45.675 { 00:17:45.675 "method": "bdev_nvme_set_hotplug", 00:17:45.676 "params": { 00:17:45.676 "enable": false, 00:17:45.676 "period_us": 100000 00:17:45.676 } 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "method": "bdev_malloc_create", 00:17:45.676 "params": { 00:17:45.676 "block_size": 4096, 00:17:45.676 "name": "malloc0", 00:17:45.676 "num_blocks": 8192, 00:17:45.676 "optimal_io_boundary": 0, 00:17:45.676 "physical_block_size": 4096, 00:17:45.676 "uuid": "b6008983-1fcb-48e2-8d7b-bb885c6943f0" 00:17:45.676 } 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "method": "bdev_wait_for_examine" 00:17:45.676 } 00:17:45.676 ] 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "subsystem": "nbd", 00:17:45.676 "config": [] 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "subsystem": "scheduler", 00:17:45.676 "config": [ 00:17:45.676 { 00:17:45.676 "method": "framework_set_scheduler", 00:17:45.676 "params": { 00:17:45.676 "name": "static" 00:17:45.676 } 00:17:45.676 } 00:17:45.676 ] 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "subsystem": "nvmf", 00:17:45.676 "config": [ 00:17:45.676 { 00:17:45.676 "method": "nvmf_set_config", 00:17:45.676 "params": { 00:17:45.676 "admin_cmd_passthru": { 00:17:45.676 "identify_ctrlr": false 00:17:45.676 }, 
00:17:45.676 "discovery_filter": "match_any" 00:17:45.676 } 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "method": "nvmf_set_max_subsystems", 00:17:45.676 "params": { 00:17:45.676 "max_subsystems": 1024 00:17:45.676 } 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "method": "nvmf_set_crdt", 00:17:45.676 "params": { 00:17:45.676 "crdt1": 0, 00:17:45.676 "crdt2": 0, 00:17:45.676 "crdt3": 0 00:17:45.676 } 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "method": "nvmf_create_transport", 00:17:45.676 "params": { 00:17:45.676 "abort_timeout_sec": 1, 00:17:45.676 "buf_cache_size": 4294967295, 00:17:45.676 "c2h_success": false, 00:17:45.676 "dif_insert_or_strip": false, 00:17:45.676 "in_capsule_data_size": 4096, 00:17:45.676 "io_unit_size": 131072, 00:17:45.676 "max_aq_depth": 128, 00:17:45.676 "max_io_qpairs_per_ctrlr": 127, 00:17:45.676 "max_io_size": 131072, 00:17:45.676 "max_queue_depth": 128, 00:17:45.676 "num_shared_buffers": 511, 00:17:45.676 "sock_priority": 0, 00:17:45.676 "trtype": "TCP", 00:17:45.676 "zcopy": false 00:17:45.676 } 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "method": "nvmf_create_subsystem", 00:17:45.676 "params": { 00:17:45.676 "allow_any_host": false, 00:17:45.676 "ana_reporting": false, 00:17:45.676 "max_cntlid": 65519, 00:17:45.676 "max_namespaces": 10, 00:17:45.676 "min_cntlid": 1, 00:17:45.676 "model_number": "SPDK bdev Controller", 00:17:45.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.676 "serial_number": "SPDK00000000000001" 00:17:45.676 } 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "method": "nvmf_subsystem_add_host", 00:17:45.676 "params": { 00:17:45.676 "host": "nqn.2016-06.io.spdk:host1", 00:17:45.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.676 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:45.676 } 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "method": "nvmf_subsystem_add_ns", 00:17:45.676 "params": { 00:17:45.676 "namespace": { 00:17:45.676 "bdev_name": "malloc0", 00:17:45.676 "nguid": "B60089831FCB48E28D7BBB885C6943F0", 00:17:45.676 "nsid": 1, 00:17:45.676 "uuid": "b6008983-1fcb-48e2-8d7b-bb885c6943f0" 00:17:45.676 }, 00:17:45.676 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:45.676 } 00:17:45.676 }, 00:17:45.676 { 00:17:45.676 "method": "nvmf_subsystem_add_listener", 00:17:45.676 "params": { 00:17:45.676 "listen_address": { 00:17:45.676 "adrfam": "IPv4", 00:17:45.676 "traddr": "10.0.0.2", 00:17:45.676 "trsvcid": "4420", 00:17:45.676 "trtype": "TCP" 00:17:45.676 }, 00:17:45.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.676 "secure_channel": true 00:17:45.676 } 00:17:45.676 } 00:17:45.676 ] 00:17:45.676 } 00:17:45.676 ] 00:17:45.676 }' 00:17:45.676 14:58:18 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:45.935 14:58:18 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:45.935 "subsystems": [ 00:17:45.935 { 00:17:45.935 "subsystem": "iobuf", 00:17:45.935 "config": [ 00:17:45.935 { 00:17:45.935 "method": "iobuf_set_options", 00:17:45.935 "params": { 00:17:45.935 "large_bufsize": 135168, 00:17:45.935 "large_pool_count": 1024, 00:17:45.935 "small_bufsize": 8192, 00:17:45.935 "small_pool_count": 8192 00:17:45.935 } 00:17:45.935 } 00:17:45.935 ] 00:17:45.935 }, 00:17:45.935 { 00:17:45.935 "subsystem": "sock", 00:17:45.935 "config": [ 00:17:45.935 { 00:17:45.935 "method": "sock_impl_set_options", 00:17:45.935 "params": { 00:17:45.935 "enable_ktls": false, 00:17:45.935 "enable_placement_id": 0, 00:17:45.935 "enable_quickack": false, 00:17:45.935 "enable_recv_pipe": true, 
00:17:45.935 "enable_zerocopy_send_client": false, 00:17:45.935 "enable_zerocopy_send_server": true, 00:17:45.935 "impl_name": "posix", 00:17:45.935 "recv_buf_size": 2097152, 00:17:45.935 "send_buf_size": 2097152, 00:17:45.935 "tls_version": 0, 00:17:45.935 "zerocopy_threshold": 0 00:17:45.935 } 00:17:45.935 }, 00:17:45.935 { 00:17:45.935 "method": "sock_impl_set_options", 00:17:45.935 "params": { 00:17:45.935 "enable_ktls": false, 00:17:45.935 "enable_placement_id": 0, 00:17:45.935 "enable_quickack": false, 00:17:45.935 "enable_recv_pipe": true, 00:17:45.935 "enable_zerocopy_send_client": false, 00:17:45.935 "enable_zerocopy_send_server": true, 00:17:45.935 "impl_name": "ssl", 00:17:45.935 "recv_buf_size": 4096, 00:17:45.935 "send_buf_size": 4096, 00:17:45.935 "tls_version": 0, 00:17:45.935 "zerocopy_threshold": 0 00:17:45.935 } 00:17:45.935 } 00:17:45.935 ] 00:17:45.935 }, 00:17:45.935 { 00:17:45.935 "subsystem": "vmd", 00:17:45.935 "config": [] 00:17:45.935 }, 00:17:45.935 { 00:17:45.935 "subsystem": "accel", 00:17:45.935 "config": [ 00:17:45.935 { 00:17:45.935 "method": "accel_set_options", 00:17:45.935 "params": { 00:17:45.935 "buf_count": 2048, 00:17:45.935 "large_cache_size": 16, 00:17:45.935 "sequence_count": 2048, 00:17:45.935 "small_cache_size": 128, 00:17:45.935 "task_count": 2048 00:17:45.935 } 00:17:45.935 } 00:17:45.935 ] 00:17:45.935 }, 00:17:45.935 { 00:17:45.935 "subsystem": "bdev", 00:17:45.935 "config": [ 00:17:45.935 { 00:17:45.935 "method": "bdev_set_options", 00:17:45.935 "params": { 00:17:45.935 "bdev_auto_examine": true, 00:17:45.935 "bdev_io_cache_size": 256, 00:17:45.935 "bdev_io_pool_size": 65535, 00:17:45.935 "iobuf_large_cache_size": 16, 00:17:45.935 "iobuf_small_cache_size": 128 00:17:45.935 } 00:17:45.935 }, 00:17:45.935 { 00:17:45.935 "method": "bdev_raid_set_options", 00:17:45.935 "params": { 00:17:45.935 "process_window_size_kb": 1024 00:17:45.935 } 00:17:45.935 }, 00:17:45.935 { 00:17:45.935 "method": "bdev_iscsi_set_options", 00:17:45.935 "params": { 00:17:45.935 "timeout_sec": 30 00:17:45.935 } 00:17:45.935 }, 00:17:45.935 { 00:17:45.935 "method": "bdev_nvme_set_options", 00:17:45.935 "params": { 00:17:45.935 "action_on_timeout": "none", 00:17:45.935 "allow_accel_sequence": false, 00:17:45.935 "arbitration_burst": 0, 00:17:45.935 "bdev_retry_count": 3, 00:17:45.935 "ctrlr_loss_timeout_sec": 0, 00:17:45.935 "delay_cmd_submit": true, 00:17:45.936 "fast_io_fail_timeout_sec": 0, 00:17:45.936 "generate_uuids": false, 00:17:45.936 "high_priority_weight": 0, 00:17:45.936 "io_path_stat": false, 00:17:45.936 "io_queue_requests": 512, 00:17:45.936 "keep_alive_timeout_ms": 10000, 00:17:45.936 "low_priority_weight": 0, 00:17:45.936 "medium_priority_weight": 0, 00:17:45.936 "nvme_adminq_poll_period_us": 10000, 00:17:45.936 "nvme_ioq_poll_period_us": 0, 00:17:45.936 "reconnect_delay_sec": 0, 00:17:45.936 "timeout_admin_us": 0, 00:17:45.936 "timeout_us": 0, 00:17:45.936 "transport_ack_timeout": 0, 00:17:45.936 "transport_retry_count": 4, 00:17:45.936 "transport_tos": 0 00:17:45.936 } 00:17:45.936 }, 00:17:45.936 { 00:17:45.936 "method": "bdev_nvme_attach_controller", 00:17:45.936 "params": { 00:17:45.936 "adrfam": "IPv4", 00:17:45.936 "ctrlr_loss_timeout_sec": 0, 00:17:45.936 "ddgst": false, 00:17:45.936 "fast_io_fail_timeout_sec": 0, 00:17:45.936 "hdgst": false, 00:17:45.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.936 "name": "TLSTEST", 00:17:45.936 "prchk_guard": false, 00:17:45.936 "prchk_reftag": false, 00:17:45.936 "psk": 
"/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:45.936 "reconnect_delay_sec": 0, 00:17:45.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.936 "traddr": "10.0.0.2", 00:17:45.936 "trsvcid": "4420", 00:17:45.936 "trtype": "TCP" 00:17:45.936 } 00:17:45.936 }, 00:17:45.936 { 00:17:45.936 "method": "bdev_nvme_set_hotplug", 00:17:45.936 "params": { 00:17:45.936 "enable": false, 00:17:45.936 "period_us": 100000 00:17:45.936 } 00:17:45.936 }, 00:17:45.936 { 00:17:45.936 "method": "bdev_wait_for_examine" 00:17:45.936 } 00:17:45.936 ] 00:17:45.936 }, 00:17:45.936 { 00:17:45.936 "subsystem": "nbd", 00:17:45.936 "config": [] 00:17:45.936 } 00:17:45.936 ] 00:17:45.936 }' 00:17:45.936 14:58:18 -- target/tls.sh@208 -- # killprocess 89615 00:17:45.936 14:58:18 -- common/autotest_common.sh@936 -- # '[' -z 89615 ']' 00:17:45.936 14:58:18 -- common/autotest_common.sh@940 -- # kill -0 89615 00:17:45.936 14:58:18 -- common/autotest_common.sh@941 -- # uname 00:17:45.936 14:58:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.936 14:58:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89615 00:17:45.936 14:58:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:45.936 14:58:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:45.936 killing process with pid 89615 00:17:45.936 14:58:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89615' 00:17:45.936 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.936 00:17:45.936 Latency(us) 00:17:45.936 [2024-12-01T14:58:19.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.936 [2024-12-01T14:58:19.051Z] =================================================================================================================== 00:17:45.936 [2024-12-01T14:58:19.051Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.936 14:58:18 -- common/autotest_common.sh@955 -- # kill 89615 00:17:45.936 14:58:18 -- common/autotest_common.sh@960 -- # wait 89615 00:17:45.936 14:58:19 -- target/tls.sh@209 -- # killprocess 89517 00:17:45.936 14:58:19 -- common/autotest_common.sh@936 -- # '[' -z 89517 ']' 00:17:45.936 14:58:19 -- common/autotest_common.sh@940 -- # kill -0 89517 00:17:45.936 14:58:19 -- common/autotest_common.sh@941 -- # uname 00:17:45.936 14:58:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.936 14:58:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89517 00:17:45.936 14:58:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:45.936 14:58:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:45.936 killing process with pid 89517 00:17:45.936 14:58:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89517' 00:17:45.936 14:58:19 -- common/autotest_common.sh@955 -- # kill 89517 00:17:45.936 14:58:19 -- common/autotest_common.sh@960 -- # wait 89517 00:17:46.195 14:58:19 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:46.195 14:58:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:46.195 14:58:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.195 14:58:19 -- target/tls.sh@212 -- # echo '{ 00:17:46.195 "subsystems": [ 00:17:46.195 { 00:17:46.195 "subsystem": "iobuf", 00:17:46.195 "config": [ 00:17:46.195 { 00:17:46.195 "method": "iobuf_set_options", 00:17:46.195 "params": { 00:17:46.195 "large_bufsize": 135168, 00:17:46.195 "large_pool_count": 1024, 
00:17:46.195 "small_bufsize": 8192, 00:17:46.195 "small_pool_count": 8192 00:17:46.195 } 00:17:46.195 } 00:17:46.195 ] 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "subsystem": "sock", 00:17:46.195 "config": [ 00:17:46.195 { 00:17:46.195 "method": "sock_impl_set_options", 00:17:46.195 "params": { 00:17:46.195 "enable_ktls": false, 00:17:46.195 "enable_placement_id": 0, 00:17:46.195 "enable_quickack": false, 00:17:46.195 "enable_recv_pipe": true, 00:17:46.195 "enable_zerocopy_send_client": false, 00:17:46.195 "enable_zerocopy_send_server": true, 00:17:46.195 "impl_name": "posix", 00:17:46.195 "recv_buf_size": 2097152, 00:17:46.195 "send_buf_size": 2097152, 00:17:46.195 "tls_version": 0, 00:17:46.195 "zerocopy_threshold": 0 00:17:46.195 } 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "method": "sock_impl_set_options", 00:17:46.195 "params": { 00:17:46.195 "enable_ktls": false, 00:17:46.195 "enable_placement_id": 0, 00:17:46.195 "enable_quickack": false, 00:17:46.195 "enable_recv_pipe": true, 00:17:46.195 "enable_zerocopy_send_client": false, 00:17:46.195 "enable_zerocopy_send_server": true, 00:17:46.195 "impl_name": "ssl", 00:17:46.195 "recv_buf_size": 4096, 00:17:46.195 "send_buf_size": 4096, 00:17:46.195 "tls_version": 0, 00:17:46.195 "zerocopy_threshold": 0 00:17:46.195 } 00:17:46.195 } 00:17:46.195 ] 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "subsystem": "vmd", 00:17:46.195 "config": [] 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "subsystem": "accel", 00:17:46.195 "config": [ 00:17:46.195 { 00:17:46.195 "method": "accel_set_options", 00:17:46.195 "params": { 00:17:46.195 "buf_count": 2048, 00:17:46.195 "large_cache_size": 16, 00:17:46.195 "sequence_count": 2048, 00:17:46.195 "small_cache_size": 128, 00:17:46.195 "task_count": 2048 00:17:46.195 } 00:17:46.195 } 00:17:46.195 ] 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "subsystem": "bdev", 00:17:46.195 "config": [ 00:17:46.195 { 00:17:46.195 "method": "bdev_set_options", 00:17:46.195 "params": { 00:17:46.195 "bdev_auto_examine": true, 00:17:46.195 "bdev_io_cache_size": 256, 00:17:46.195 "bdev_io_pool_size": 65535, 00:17:46.195 "iobuf_large_cache_size": 16, 00:17:46.195 "iobuf_small_cache_size": 128 00:17:46.195 } 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "method": "bdev_raid_set_options", 00:17:46.195 "params": { 00:17:46.195 "process_window_size_kb": 1024 00:17:46.195 } 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "method": "bdev_iscsi_set_options", 00:17:46.195 "params": { 00:17:46.195 "timeout_sec": 30 00:17:46.195 } 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "method": "bdev_nvme_set_options", 00:17:46.195 "params": { 00:17:46.195 "action_on_timeout": "none", 00:17:46.195 "allow_accel_sequence": false, 00:17:46.195 "arbitration_burst": 0, 00:17:46.195 "bdev_retry_count": 3, 00:17:46.195 "ctrlr_loss_timeout_sec": 0, 00:17:46.195 "delay_cmd_submit": true, 00:17:46.195 "fast_io_fail_timeout_sec": 0, 00:17:46.195 "generate_uuids": false, 00:17:46.195 "high_priority_weight": 0, 00:17:46.195 "io_path_stat": false, 00:17:46.195 "io_queue_requests": 0, 00:17:46.195 "keep_alive_timeout_ms": 10000, 00:17:46.195 "low_priority_weight": 0, 00:17:46.195 "medium_priority_weight": 0, 00:17:46.195 "nvme_adminq_poll_period_us": 10000, 00:17:46.195 "nvme_ioq_poll_period_us": 0, 00:17:46.195 "reconnect_delay_sec": 0, 00:17:46.195 "timeout_admin_us": 0, 00:17:46.195 "timeout_us": 0, 00:17:46.195 "transport_ack_timeout": 0, 00:17:46.195 "transport_retry_count": 4, 00:17:46.195 "transport_tos": 0 00:17:46.195 } 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 
"method": "bdev_nvme_set_hotplug", 00:17:46.195 "params": { 00:17:46.195 "enable": false, 00:17:46.195 "period_us": 100000 00:17:46.195 } 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "method": "bdev_malloc_create", 00:17:46.195 "params": { 00:17:46.195 "block_size": 4096, 00:17:46.195 "name": "malloc0", 00:17:46.195 "num_blocks": 8192, 00:17:46.195 "optimal_io_boundary": 0, 00:17:46.195 "physical_block_size": 4096, 00:17:46.195 "uuid": "b6008983-1fcb-48e2-8d7b-bb885c6943f0" 00:17:46.195 } 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "method": "bdev_wait_for_examine" 00:17:46.195 } 00:17:46.195 ] 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "subsystem": "nbd", 00:17:46.195 "config": [] 00:17:46.195 }, 00:17:46.195 { 00:17:46.195 "subsystem": "scheduler", 00:17:46.196 "config": [ 00:17:46.196 { 00:17:46.196 "method": "framework_set_scheduler", 00:17:46.196 "params": { 00:17:46.196 "name": "static" 00:17:46.196 } 00:17:46.196 } 00:17:46.196 ] 00:17:46.196 }, 00:17:46.196 { 00:17:46.196 "subsystem": "nvmf", 00:17:46.196 "config": [ 00:17:46.196 { 00:17:46.196 "method": "nvmf_set_config", 00:17:46.196 "params": { 00:17:46.196 "admin_cmd_passthru": { 00:17:46.196 "identify_ctrlr": false 00:17:46.196 }, 00:17:46.196 "discovery_filter": "match_any" 00:17:46.196 } 00:17:46.196 }, 00:17:46.196 { 00:17:46.196 "method": "nvmf_set_max_subsystems", 00:17:46.196 "params": { 00:17:46.196 "max_subsystems": 1024 00:17:46.196 } 00:17:46.196 }, 00:17:46.196 { 00:17:46.196 "method": "nvmf_set_crdt", 00:17:46.196 "params": { 00:17:46.196 "crdt1": 0, 00:17:46.196 "crdt2": 0, 00:17:46.196 "crdt3": 0 00:17:46.196 } 00:17:46.196 }, 00:17:46.196 { 00:17:46.196 "method": "nvmf_create_transport", 00:17:46.196 "params": { 00:17:46.196 "abort_timeout_sec": 1, 00:17:46.196 "buf_cache_size": 4294967295, 00:17:46.196 "c2h_success": false, 00:17:46.196 "dif_insert_or_strip": false, 00:17:46.196 "in_capsule_data_size": 4096, 00:17:46.196 "io_unit_size": 131072, 00:17:46.196 "max_aq_depth": 128, 00:17:46.196 "max_io_qpairs_per_ctrlr": 127, 00:17:46.196 "max_io_size": 131072, 00:17:46.196 "max_queue_depth": 128, 00:17:46.196 "num_shared_buffers": 511, 00:17:46.196 "sock_priority": 0, 00:17:46.196 "trtype": "TCP", 00:17:46.196 "zcopy": false 00:17:46.196 } 00:17:46.196 }, 00:17:46.196 { 00:17:46.196 "method": "nvmf_create_subsystem", 00:17:46.196 "params": { 00:17:46.196 "allow_any_host": false, 00:17:46.196 "ana_reporting": false, 00:17:46.196 "max_cntlid": 65519, 00:17:46.196 "max_namespaces": 10, 00:17:46.196 "min_cntlid": 1, 00:17:46.196 "model_number": "SPDK bdev Controller", 00:17:46.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.196 "serial_number": "SPDK00000000000001" 00:17:46.196 } 00:17:46.196 }, 00:17:46.196 { 00:17:46.196 "method": "nvmf_subsystem_add_host", 00:17:46.196 "params": { 00:17:46.196 "host": "nqn.2016-06.io.spdk:host1", 00:17:46.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.196 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:46.196 } 00:17:46.196 }, 00:17:46.196 { 00:17:46.196 "method": "nvmf_subsystem_add_ns", 00:17:46.196 "params": { 00:17:46.196 "namespace": { 00:17:46.196 "bdev_name": "malloc0", 00:17:46.196 "nguid": "B60089831FCB48E28D7BBB885C6943F0", 00:17:46.196 "nsid": 1, 00:17:46.196 "uuid": "b6008983-1fcb-48e2-8d7b-bb885c6943f0" 00:17:46.196 }, 00:17:46.196 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:46.196 } 00:17:46.196 }, 00:17:46.196 { 00:17:46.196 "method": "nvmf_subsystem_add_listener", 00:17:46.196 "params": { 00:17:46.196 "listen_address": { 00:17:46.196 
"adrfam": "IPv4", 00:17:46.196 "traddr": "10.0.0.2", 00:17:46.196 "trsvcid": "4420", 00:17:46.196 "trtype": "TCP" 00:17:46.196 }, 00:17:46.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.196 "secure_channel": true 00:17:46.196 } 00:17:46.196 } 00:17:46.196 ] 00:17:46.196 } 00:17:46.196 ] 00:17:46.196 }' 00:17:46.196 14:58:19 -- common/autotest_common.sh@10 -- # set +x 00:17:46.455 14:58:19 -- nvmf/common.sh@469 -- # nvmfpid=89694 00:17:46.455 14:58:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:46.455 14:58:19 -- nvmf/common.sh@470 -- # waitforlisten 89694 00:17:46.455 14:58:19 -- common/autotest_common.sh@829 -- # '[' -z 89694 ']' 00:17:46.455 14:58:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.455 14:58:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.455 14:58:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.455 14:58:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.455 14:58:19 -- common/autotest_common.sh@10 -- # set +x 00:17:46.455 [2024-12-01 14:58:19.367619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:46.455 [2024-12-01 14:58:19.367727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.455 [2024-12-01 14:58:19.506729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.455 [2024-12-01 14:58:19.568099] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:46.455 [2024-12-01 14:58:19.568244] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.455 [2024-12-01 14:58:19.568257] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.455 [2024-12-01 14:58:19.568265] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:46.714 [2024-12-01 14:58:19.568299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.714 [2024-12-01 14:58:19.814705] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.973 [2024-12-01 14:58:19.846662] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:46.973 [2024-12-01 14:58:19.846907] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.232 14:58:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.232 14:58:20 -- common/autotest_common.sh@862 -- # return 0 00:17:47.232 14:58:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:47.232 14:58:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.232 14:58:20 -- common/autotest_common.sh@10 -- # set +x 00:17:47.232 14:58:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.232 14:58:20 -- target/tls.sh@216 -- # bdevperf_pid=89738 00:17:47.232 14:58:20 -- target/tls.sh@217 -- # waitforlisten 89738 /var/tmp/bdevperf.sock 00:17:47.232 14:58:20 -- common/autotest_common.sh@829 -- # '[' -z 89738 ']' 00:17:47.232 14:58:20 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:47.232 14:58:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.232 14:58:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.232 14:58:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
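From tls.sh line 212 onward the configuration captured earlier with save_config is replayed instead of re-issuing individual RPCs: the JSON is echoed into a process substitution and handed to the target as -c /dev/fd/62 (and, for bdevperf below, /dev/fd/63). A small sketch of the pattern with a deliberately stubbed config; the real run also wraps nvmf_tgt in ip netns exec nvmf_tgt_ns_spdk, omitted here:

    ROOT=/home/vagrant/spdk_repo/spdk
    tgtconf='{ "subsystems": [] }'    # stub; the test passes the full save_config JSON shown above

    # bash turns <(...) into a /dev/fd/NN path, which is exactly what the
    # captured command lines (-c /dev/fd/62) show.
    "$ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
    NVMF_PID=$!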
00:17:47.232 14:58:20 -- target/tls.sh@213 -- # echo '{ 00:17:47.232 "subsystems": [ 00:17:47.232 { 00:17:47.232 "subsystem": "iobuf", 00:17:47.232 "config": [ 00:17:47.232 { 00:17:47.232 "method": "iobuf_set_options", 00:17:47.232 "params": { 00:17:47.232 "large_bufsize": 135168, 00:17:47.232 "large_pool_count": 1024, 00:17:47.232 "small_bufsize": 8192, 00:17:47.232 "small_pool_count": 8192 00:17:47.232 } 00:17:47.232 } 00:17:47.232 ] 00:17:47.232 }, 00:17:47.232 { 00:17:47.232 "subsystem": "sock", 00:17:47.232 "config": [ 00:17:47.232 { 00:17:47.232 "method": "sock_impl_set_options", 00:17:47.232 "params": { 00:17:47.232 "enable_ktls": false, 00:17:47.232 "enable_placement_id": 0, 00:17:47.232 "enable_quickack": false, 00:17:47.232 "enable_recv_pipe": true, 00:17:47.232 "enable_zerocopy_send_client": false, 00:17:47.232 "enable_zerocopy_send_server": true, 00:17:47.232 "impl_name": "posix", 00:17:47.232 "recv_buf_size": 2097152, 00:17:47.232 "send_buf_size": 2097152, 00:17:47.232 "tls_version": 0, 00:17:47.232 "zerocopy_threshold": 0 00:17:47.232 } 00:17:47.232 }, 00:17:47.232 { 00:17:47.232 "method": "sock_impl_set_options", 00:17:47.232 "params": { 00:17:47.232 "enable_ktls": false, 00:17:47.232 "enable_placement_id": 0, 00:17:47.232 "enable_quickack": false, 00:17:47.232 "enable_recv_pipe": true, 00:17:47.232 "enable_zerocopy_send_client": false, 00:17:47.232 "enable_zerocopy_send_server": true, 00:17:47.232 "impl_name": "ssl", 00:17:47.232 "recv_buf_size": 4096, 00:17:47.232 "send_buf_size": 4096, 00:17:47.232 "tls_version": 0, 00:17:47.232 "zerocopy_threshold": 0 00:17:47.232 } 00:17:47.232 } 00:17:47.232 ] 00:17:47.232 }, 00:17:47.232 { 00:17:47.232 "subsystem": "vmd", 00:17:47.232 "config": [] 00:17:47.232 }, 00:17:47.232 { 00:17:47.232 "subsystem": "accel", 00:17:47.232 "config": [ 00:17:47.232 { 00:17:47.232 "method": "accel_set_options", 00:17:47.232 "params": { 00:17:47.232 "buf_count": 2048, 00:17:47.232 "large_cache_size": 16, 00:17:47.232 "sequence_count": 2048, 00:17:47.232 "small_cache_size": 128, 00:17:47.232 "task_count": 2048 00:17:47.232 } 00:17:47.232 } 00:17:47.232 ] 00:17:47.232 }, 00:17:47.232 { 00:17:47.232 "subsystem": "bdev", 00:17:47.232 "config": [ 00:17:47.232 { 00:17:47.233 "method": "bdev_set_options", 00:17:47.233 "params": { 00:17:47.233 "bdev_auto_examine": true, 00:17:47.233 "bdev_io_cache_size": 256, 00:17:47.233 "bdev_io_pool_size": 65535, 00:17:47.233 "iobuf_large_cache_size": 16, 00:17:47.233 "iobuf_small_cache_size": 128 00:17:47.233 } 00:17:47.233 }, 00:17:47.233 { 00:17:47.233 "method": "bdev_raid_set_options", 00:17:47.233 "params": { 00:17:47.233 "process_window_size_kb": 1024 00:17:47.233 } 00:17:47.233 }, 00:17:47.233 { 00:17:47.233 "method": "bdev_iscsi_set_options", 00:17:47.233 "params": { 00:17:47.233 "timeout_sec": 30 00:17:47.233 } 00:17:47.233 }, 00:17:47.233 { 00:17:47.233 "method": "bdev_nvme_set_options", 00:17:47.233 "params": { 00:17:47.233 "action_on_timeout": "none", 00:17:47.233 "allow_accel_sequence": false, 00:17:47.233 "arbitration_burst": 0, 00:17:47.233 "bdev_retry_count": 3, 00:17:47.233 "ctrlr_loss_timeout_sec": 0, 00:17:47.233 "delay_cmd_submit": true, 00:17:47.233 "fast_io_fail_timeout_sec": 0, 00:17:47.233 "generate_uuids": false, 00:17:47.233 "high_priority_weight": 0, 00:17:47.233 "io_path_stat": false, 00:17:47.233 "io_queue_requests": 512, 00:17:47.233 "keep_alive_timeout_ms": 10000, 00:17:47.233 "low_priority_weight": 0, 00:17:47.233 "medium_priority_weight": 0, 00:17:47.233 "nvme_adminq_poll_period_us": 10000, 
00:17:47.233 "nvme_ioq_poll_period_us": 0, 00:17:47.233 "reconnect_delay_sec": 0, 00:17:47.233 "timeout_admin_us": 0, 00:17:47.233 "timeout_us": 0, 00:17:47.233 "transport_ack_timeout": 0, 00:17:47.233 "transport_retry_count": 4, 00:17:47.233 "transport_tos": 0 00:17:47.233 } 00:17:47.233 }, 00:17:47.233 { 00:17:47.233 "method": "bdev_nvme_attach_controller", 00:17:47.233 "params": { 00:17:47.233 "adrfam": "IPv4", 00:17:47.233 "ctrlr_loss_timeout_sec": 0, 00:17:47.233 "ddgst": false, 00:17:47.233 "fast_io_fail_timeout_sec": 0, 00:17:47.233 "hdgst": false, 00:17:47.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.233 "name": "TLSTEST", 00:17:47.233 "prchk_guard": false, 00:17:47.233 "prchk_reftag": false, 00:17:47.233 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:47.233 "reconnect_delay_sec": 0, 00:17:47.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.233 "traddr": "10.0.0.2", 00:17:47.233 "trsvcid": "4420", 00:17:47.233 "t 14:58:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.233 rtype": "TCP" 00:17:47.233 } 00:17:47.233 }, 00:17:47.233 { 00:17:47.233 "method": "bdev_nvme_set_hotplug", 00:17:47.233 "params": { 00:17:47.233 "enable": false, 00:17:47.233 "period_us": 100000 00:17:47.233 } 00:17:47.233 }, 00:17:47.233 { 00:17:47.233 "method": "bdev_wait_for_examine" 00:17:47.233 } 00:17:47.233 ] 00:17:47.233 }, 00:17:47.233 { 00:17:47.233 "subsystem": "nbd", 00:17:47.233 "config": [] 00:17:47.233 } 00:17:47.233 ] 00:17:47.233 }' 00:17:47.233 14:58:20 -- common/autotest_common.sh@10 -- # set +x 00:17:47.492 [2024-12-01 14:58:20.356701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:47.492 [2024-12-01 14:58:20.356807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89738 ] 00:17:47.492 [2024-12-01 14:58:20.490371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.492 [2024-12-01 14:58:20.545854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.751 [2024-12-01 14:58:20.693770] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.318 14:58:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.318 14:58:21 -- common/autotest_common.sh@862 -- # return 0 00:17:48.318 14:58:21 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:48.318 Running I/O for 10 seconds... 
00:17:58.322 00:17:58.322 Latency(us) 00:17:58.322 [2024-12-01T14:58:31.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.322 [2024-12-01T14:58:31.437Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:58.322 Verification LBA range: start 0x0 length 0x2000 00:17:58.322 TLSTESTn1 : 10.01 6420.77 25.08 0.00 0.00 19911.18 2085.24 20375.74 00:17:58.322 [2024-12-01T14:58:31.437Z] =================================================================================================================== 00:17:58.322 [2024-12-01T14:58:31.437Z] Total : 6420.77 25.08 0.00 0.00 19911.18 2085.24 20375.74 00:17:58.580 0 00:17:58.581 14:58:31 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:58.581 14:58:31 -- target/tls.sh@223 -- # killprocess 89738 00:17:58.581 14:58:31 -- common/autotest_common.sh@936 -- # '[' -z 89738 ']' 00:17:58.581 14:58:31 -- common/autotest_common.sh@940 -- # kill -0 89738 00:17:58.581 14:58:31 -- common/autotest_common.sh@941 -- # uname 00:17:58.581 14:58:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:58.581 14:58:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89738 00:17:58.581 14:58:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:58.581 14:58:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:58.581 killing process with pid 89738 00:17:58.581 14:58:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89738' 00:17:58.581 14:58:31 -- common/autotest_common.sh@955 -- # kill 89738 00:17:58.581 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.581 00:17:58.581 Latency(us) 00:17:58.581 [2024-12-01T14:58:31.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.581 [2024-12-01T14:58:31.696Z] =================================================================================================================== 00:17:58.581 [2024-12-01T14:58:31.696Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.581 14:58:31 -- common/autotest_common.sh@960 -- # wait 89738 00:17:58.581 14:58:31 -- target/tls.sh@224 -- # killprocess 89694 00:17:58.581 14:58:31 -- common/autotest_common.sh@936 -- # '[' -z 89694 ']' 00:17:58.581 14:58:31 -- common/autotest_common.sh@940 -- # kill -0 89694 00:17:58.581 14:58:31 -- common/autotest_common.sh@941 -- # uname 00:17:58.581 14:58:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:58.581 14:58:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89694 00:17:58.839 14:58:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:58.839 14:58:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:58.839 killing process with pid 89694 00:17:58.839 14:58:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89694' 00:17:58.839 14:58:31 -- common/autotest_common.sh@955 -- # kill 89694 00:17:58.839 14:58:31 -- common/autotest_common.sh@960 -- # wait 89694 00:17:59.098 14:58:31 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:59.098 14:58:31 -- target/tls.sh@227 -- # cleanup 00:17:59.098 14:58:31 -- target/tls.sh@15 -- # process_shm --id 0 00:17:59.098 14:58:31 -- common/autotest_common.sh@806 -- # type=--id 00:17:59.098 14:58:31 -- common/autotest_common.sh@807 -- # id=0 00:17:59.098 14:58:31 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:59.098 14:58:31 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:17:59.098 14:58:31 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:59.098 14:58:31 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:59.098 14:58:31 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:59.098 14:58:31 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:59.098 nvmf_trace.0 00:17:59.098 14:58:32 -- common/autotest_common.sh@821 -- # return 0 00:17:59.098 14:58:32 -- target/tls.sh@16 -- # killprocess 89738 00:17:59.098 14:58:32 -- common/autotest_common.sh@936 -- # '[' -z 89738 ']' 00:17:59.098 14:58:32 -- common/autotest_common.sh@940 -- # kill -0 89738 00:17:59.098 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89738) - No such process 00:17:59.098 Process with pid 89738 is not found 00:17:59.098 14:58:32 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89738 is not found' 00:17:59.098 14:58:32 -- target/tls.sh@17 -- # nvmftestfini 00:17:59.098 14:58:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:59.098 14:58:32 -- nvmf/common.sh@116 -- # sync 00:17:59.098 14:58:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:59.098 14:58:32 -- nvmf/common.sh@119 -- # set +e 00:17:59.098 14:58:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:59.098 14:58:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:59.098 rmmod nvme_tcp 00:17:59.098 rmmod nvme_fabrics 00:17:59.098 rmmod nvme_keyring 00:17:59.098 14:58:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:59.098 14:58:32 -- nvmf/common.sh@123 -- # set -e 00:17:59.098 14:58:32 -- nvmf/common.sh@124 -- # return 0 00:17:59.098 14:58:32 -- nvmf/common.sh@477 -- # '[' -n 89694 ']' 00:17:59.098 14:58:32 -- nvmf/common.sh@478 -- # killprocess 89694 00:17:59.098 14:58:32 -- common/autotest_common.sh@936 -- # '[' -z 89694 ']' 00:17:59.098 14:58:32 -- common/autotest_common.sh@940 -- # kill -0 89694 00:17:59.098 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89694) - No such process 00:17:59.098 Process with pid 89694 is not found 00:17:59.098 14:58:32 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89694 is not found' 00:17:59.098 14:58:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:59.098 14:58:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:59.098 14:58:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:59.098 14:58:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.098 14:58:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:59.098 14:58:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.098 14:58:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.098 14:58:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.098 14:58:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:59.098 14:58:32 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:59.098 00:17:59.098 real 1m11.123s 00:17:59.098 user 1m45.148s 00:17:59.098 sys 0m27.482s 00:17:59.098 14:58:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:59.098 ************************************ 00:17:59.098 14:58:32 -- common/autotest_common.sh@10 -- # set +x 00:17:59.098 END TEST nvmf_tls 00:17:59.098 
************************************ 00:17:59.357 14:58:32 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:59.357 14:58:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:59.357 14:58:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:59.357 14:58:32 -- common/autotest_common.sh@10 -- # set +x 00:17:59.357 ************************************ 00:17:59.357 START TEST nvmf_fips 00:17:59.357 ************************************ 00:17:59.357 14:58:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:59.357 * Looking for test storage... 00:17:59.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:59.357 14:58:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:59.357 14:58:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:59.357 14:58:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:59.357 14:58:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:59.358 14:58:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:59.358 14:58:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:59.358 14:58:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:59.358 14:58:32 -- scripts/common.sh@335 -- # IFS=.-: 00:17:59.358 14:58:32 -- scripts/common.sh@335 -- # read -ra ver1 00:17:59.358 14:58:32 -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.358 14:58:32 -- scripts/common.sh@336 -- # read -ra ver2 00:17:59.358 14:58:32 -- scripts/common.sh@337 -- # local 'op=<' 00:17:59.358 14:58:32 -- scripts/common.sh@339 -- # ver1_l=2 00:17:59.358 14:58:32 -- scripts/common.sh@340 -- # ver2_l=1 00:17:59.358 14:58:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:59.358 14:58:32 -- scripts/common.sh@343 -- # case "$op" in 00:17:59.358 14:58:32 -- scripts/common.sh@344 -- # : 1 00:17:59.358 14:58:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:59.358 14:58:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.358 14:58:32 -- scripts/common.sh@364 -- # decimal 1 00:17:59.358 14:58:32 -- scripts/common.sh@352 -- # local d=1 00:17:59.358 14:58:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.358 14:58:32 -- scripts/common.sh@354 -- # echo 1 00:17:59.358 14:58:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:59.358 14:58:32 -- scripts/common.sh@365 -- # decimal 2 00:17:59.358 14:58:32 -- scripts/common.sh@352 -- # local d=2 00:17:59.358 14:58:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.358 14:58:32 -- scripts/common.sh@354 -- # echo 2 00:17:59.358 14:58:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:59.358 14:58:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:59.358 14:58:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:59.358 14:58:32 -- scripts/common.sh@367 -- # return 0 00:17:59.358 14:58:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.358 14:58:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:59.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.358 --rc genhtml_branch_coverage=1 00:17:59.358 --rc genhtml_function_coverage=1 00:17:59.358 --rc genhtml_legend=1 00:17:59.358 --rc geninfo_all_blocks=1 00:17:59.358 --rc geninfo_unexecuted_blocks=1 00:17:59.358 00:17:59.358 ' 00:17:59.358 14:58:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:59.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.358 --rc genhtml_branch_coverage=1 00:17:59.358 --rc genhtml_function_coverage=1 00:17:59.358 --rc genhtml_legend=1 00:17:59.358 --rc geninfo_all_blocks=1 00:17:59.358 --rc geninfo_unexecuted_blocks=1 00:17:59.358 00:17:59.358 ' 00:17:59.358 14:58:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:59.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.358 --rc genhtml_branch_coverage=1 00:17:59.358 --rc genhtml_function_coverage=1 00:17:59.358 --rc genhtml_legend=1 00:17:59.358 --rc geninfo_all_blocks=1 00:17:59.358 --rc geninfo_unexecuted_blocks=1 00:17:59.358 00:17:59.358 ' 00:17:59.358 14:58:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:59.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.358 --rc genhtml_branch_coverage=1 00:17:59.358 --rc genhtml_function_coverage=1 00:17:59.358 --rc genhtml_legend=1 00:17:59.358 --rc geninfo_all_blocks=1 00:17:59.358 --rc geninfo_unexecuted_blocks=1 00:17:59.358 00:17:59.358 ' 00:17:59.358 14:58:32 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:59.358 14:58:32 -- nvmf/common.sh@7 -- # uname -s 00:17:59.358 14:58:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.358 14:58:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.358 14:58:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.358 14:58:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.358 14:58:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.358 14:58:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.358 14:58:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.358 14:58:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.358 14:58:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.358 14:58:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.358 14:58:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:17:59.358 
14:58:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:17:59.358 14:58:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.358 14:58:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.358 14:58:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:59.358 14:58:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:59.358 14:58:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.358 14:58:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.358 14:58:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.358 14:58:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.358 14:58:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.358 14:58:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.358 14:58:32 -- paths/export.sh@5 -- # export PATH 00:17:59.358 14:58:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.358 14:58:32 -- nvmf/common.sh@46 -- # : 0 00:17:59.358 14:58:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:59.358 14:58:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:59.358 14:58:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:59.358 14:58:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.358 14:58:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.358 14:58:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:59.358 14:58:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:59.358 14:58:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:59.358 14:58:32 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.358 14:58:32 -- fips/fips.sh@89 -- # check_openssl_version 00:17:59.358 14:58:32 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:59.358 14:58:32 -- fips/fips.sh@85 -- # openssl version 00:17:59.358 14:58:32 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:59.358 14:58:32 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:59.358 14:58:32 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:59.358 14:58:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:59.358 14:58:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:59.358 14:58:32 -- scripts/common.sh@335 -- # IFS=.-: 00:17:59.358 14:58:32 -- scripts/common.sh@335 -- # read -ra ver1 00:17:59.358 14:58:32 -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.358 14:58:32 -- scripts/common.sh@336 -- # read -ra ver2 00:17:59.358 14:58:32 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:59.358 14:58:32 -- scripts/common.sh@339 -- # ver1_l=3 00:17:59.358 14:58:32 -- scripts/common.sh@340 -- # ver2_l=3 00:17:59.358 14:58:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:59.358 14:58:32 -- scripts/common.sh@343 -- # case "$op" in 00:17:59.358 14:58:32 -- scripts/common.sh@347 -- # : 1 00:17:59.358 14:58:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:59.358 14:58:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:59.358 14:58:32 -- scripts/common.sh@364 -- # decimal 3 00:17:59.358 14:58:32 -- scripts/common.sh@352 -- # local d=3 00:17:59.358 14:58:32 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:59.358 14:58:32 -- scripts/common.sh@354 -- # echo 3 00:17:59.358 14:58:32 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:59.358 14:58:32 -- scripts/common.sh@365 -- # decimal 3 00:17:59.358 14:58:32 -- scripts/common.sh@352 -- # local d=3 00:17:59.358 14:58:32 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:59.358 14:58:32 -- scripts/common.sh@354 -- # echo 3 00:17:59.358 14:58:32 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:59.358 14:58:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:59.358 14:58:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:59.358 14:58:32 -- scripts/common.sh@363 -- # (( v++ )) 00:17:59.358 14:58:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:59.358 14:58:32 -- scripts/common.sh@364 -- # decimal 1 00:17:59.358 14:58:32 -- scripts/common.sh@352 -- # local d=1 00:17:59.358 14:58:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.358 14:58:32 -- scripts/common.sh@354 -- # echo 1 00:17:59.358 14:58:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:59.358 14:58:32 -- scripts/common.sh@365 -- # decimal 0 00:17:59.358 14:58:32 -- scripts/common.sh@352 -- # local d=0 00:17:59.358 14:58:32 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:59.358 14:58:32 -- scripts/common.sh@354 -- # echo 0 00:17:59.358 14:58:32 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:59.358 14:58:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:59.358 14:58:32 -- scripts/common.sh@366 -- # return 0 00:17:59.358 14:58:32 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:59.617 14:58:32 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:59.617 14:58:32 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:59.617 14:58:32 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:59.617 14:58:32 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:59.617 14:58:32 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:59.617 14:58:32 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:59.617 14:58:32 -- fips/fips.sh@113 -- # build_openssl_config 00:17:59.617 14:58:32 -- fips/fips.sh@37 -- # cat 00:17:59.617 14:58:32 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:59.617 14:58:32 -- fips/fips.sh@58 -- # cat - 00:17:59.617 14:58:32 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:59.617 14:58:32 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:59.617 14:58:32 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:59.617 14:58:32 -- fips/fips.sh@116 -- # grep name 00:17:59.617 14:58:32 -- fips/fips.sh@116 -- # openssl list -providers 00:17:59.617 14:58:32 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:59.617 14:58:32 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:59.617 14:58:32 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:59.617 14:58:32 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:59.617 14:58:32 -- fips/fips.sh@127 -- # : 00:17:59.617 14:58:32 -- common/autotest_common.sh@650 -- # local es=0 00:17:59.617 14:58:32 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:59.617 14:58:32 -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:59.617 14:58:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.617 14:58:32 -- common/autotest_common.sh@642 -- # type -t openssl 00:17:59.617 14:58:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.617 14:58:32 -- common/autotest_common.sh@644 -- # type -P openssl 00:17:59.617 14:58:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.617 14:58:32 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:59.617 14:58:32 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:59.617 14:58:32 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:59.617 Error setting digest 00:17:59.617 40724E4B307F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:59.617 40724E4B307F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:59.617 14:58:32 -- common/autotest_common.sh@653 -- # es=1 00:17:59.617 14:58:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:59.617 14:58:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:59.617 14:58:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:59.617 14:58:32 -- fips/fips.sh@130 -- # nvmftestinit 00:17:59.617 14:58:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:59.617 14:58:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.617 14:58:32 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:17:59.617 14:58:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:59.617 14:58:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:59.617 14:58:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.617 14:58:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.617 14:58:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.617 14:58:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:59.617 14:58:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:59.617 14:58:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:59.617 14:58:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:59.617 14:58:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:59.617 14:58:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:59.617 14:58:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.617 14:58:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.617 14:58:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:59.617 14:58:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:59.617 14:58:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:59.617 14:58:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:59.617 14:58:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:59.617 14:58:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.617 14:58:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:59.617 14:58:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:59.617 14:58:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:59.617 14:58:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:59.617 14:58:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:59.617 14:58:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:59.617 Cannot find device "nvmf_tgt_br" 00:17:59.617 14:58:32 -- nvmf/common.sh@154 -- # true 00:17:59.617 14:58:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.617 Cannot find device "nvmf_tgt_br2" 00:17:59.617 14:58:32 -- nvmf/common.sh@155 -- # true 00:17:59.617 14:58:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:59.617 14:58:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:59.617 Cannot find device "nvmf_tgt_br" 00:17:59.617 14:58:32 -- nvmf/common.sh@157 -- # true 00:17:59.617 14:58:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:59.617 Cannot find device "nvmf_tgt_br2" 00:17:59.617 14:58:32 -- nvmf/common.sh@158 -- # true 00:17:59.617 14:58:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:59.617 14:58:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:59.617 14:58:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.617 14:58:32 -- nvmf/common.sh@161 -- # true 00:17:59.617 14:58:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.617 14:58:32 -- nvmf/common.sh@162 -- # true 00:17:59.617 14:58:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:59.617 14:58:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:59.617 14:58:32 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:59.617 14:58:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:59.617 14:58:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:59.876 14:58:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:59.876 14:58:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:59.876 14:58:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:59.876 14:58:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:59.876 14:58:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:59.876 14:58:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:59.876 14:58:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:59.876 14:58:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:59.876 14:58:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:59.876 14:58:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:59.876 14:58:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:59.876 14:58:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:59.876 14:58:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:59.876 14:58:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:59.876 14:58:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:59.876 14:58:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:59.876 14:58:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:59.876 14:58:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:59.876 14:58:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:59.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:17:59.876 00:17:59.876 --- 10.0.0.2 ping statistics --- 00:17:59.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.876 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:59.876 14:58:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:59.876 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:59.876 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:59.876 00:17:59.876 --- 10.0.0.3 ping statistics --- 00:17:59.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.876 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:59.876 14:58:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:59.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:59.876 00:17:59.876 --- 10.0.0.1 ping statistics --- 00:17:59.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.876 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:59.876 14:58:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.876 14:58:32 -- nvmf/common.sh@421 -- # return 0 00:17:59.876 14:58:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:59.876 14:58:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.876 14:58:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:59.876 14:58:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:59.876 14:58:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.876 14:58:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:59.876 14:58:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:59.876 14:58:32 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:59.876 14:58:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:59.876 14:58:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:59.876 14:58:32 -- common/autotest_common.sh@10 -- # set +x 00:17:59.876 14:58:32 -- nvmf/common.sh@469 -- # nvmfpid=90100 00:17:59.876 14:58:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.876 14:58:32 -- nvmf/common.sh@470 -- # waitforlisten 90100 00:17:59.876 14:58:32 -- common/autotest_common.sh@829 -- # '[' -z 90100 ']' 00:17:59.876 14:58:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.876 14:58:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.876 14:58:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.876 14:58:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.876 14:58:32 -- common/autotest_common.sh@10 -- # set +x 00:18:00.135 [2024-12-01 14:58:32.990662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:00.135 [2024-12-01 14:58:32.991253] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.135 [2024-12-01 14:58:33.128248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.135 [2024-12-01 14:58:33.203580] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:00.135 [2024-12-01 14:58:33.203722] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.135 [2024-12-01 14:58:33.203735] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.135 [2024-12-01 14:58:33.203743] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:00.135 [2024-12-01 14:58:33.203808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.070 14:58:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.070 14:58:33 -- common/autotest_common.sh@862 -- # return 0 00:18:01.070 14:58:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:01.070 14:58:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:01.070 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:18:01.070 14:58:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.070 14:58:34 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:01.070 14:58:34 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:01.070 14:58:34 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:01.070 14:58:34 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:01.070 14:58:34 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:01.070 14:58:34 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:01.070 14:58:34 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:01.070 14:58:34 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:01.329 [2024-12-01 14:58:34.225710] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.329 [2024-12-01 14:58:34.241664] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.329 [2024-12-01 14:58:34.241897] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.329 malloc0 00:18:01.329 14:58:34 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.329 14:58:34 -- fips/fips.sh@147 -- # bdevperf_pid=90153 00:18:01.329 14:58:34 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.329 14:58:34 -- fips/fips.sh@148 -- # waitforlisten 90153 /var/tmp/bdevperf.sock 00:18:01.329 14:58:34 -- common/autotest_common.sh@829 -- # '[' -z 90153 ']' 00:18:01.329 14:58:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.329 14:58:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.329 14:58:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.329 14:58:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.329 14:58:34 -- common/autotest_common.sh@10 -- # set +x 00:18:01.329 [2024-12-01 14:58:34.385107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:01.329 [2024-12-01 14:58:34.385231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90153 ] 00:18:01.586 [2024-12-01 14:58:34.526778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.586 [2024-12-01 14:58:34.596796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.524 14:58:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.524 14:58:35 -- common/autotest_common.sh@862 -- # return 0 00:18:02.524 14:58:35 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:02.524 [2024-12-01 14:58:35.521832] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.524 TLSTESTn1 00:18:02.524 14:58:35 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.782 Running I/O for 10 seconds... 00:18:12.757 00:18:12.757 Latency(us) 00:18:12.757 [2024-12-01T14:58:45.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.757 [2024-12-01T14:58:45.872Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:12.757 Verification LBA range: start 0x0 length 0x2000 00:18:12.757 TLSTESTn1 : 10.01 6346.49 24.79 0.00 0.00 20137.63 5242.88 18826.71 00:18:12.757 [2024-12-01T14:58:45.872Z] =================================================================================================================== 00:18:12.757 [2024-12-01T14:58:45.872Z] Total : 6346.49 24.79 0.00 0.00 20137.63 5242.88 18826.71 00:18:12.757 0 00:18:12.757 14:58:45 -- fips/fips.sh@1 -- # cleanup 00:18:12.757 14:58:45 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:12.757 14:58:45 -- common/autotest_common.sh@806 -- # type=--id 00:18:12.757 14:58:45 -- common/autotest_common.sh@807 -- # id=0 00:18:12.757 14:58:45 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:12.757 14:58:45 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:12.757 14:58:45 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:12.757 14:58:45 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:12.757 14:58:45 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:12.757 14:58:45 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:12.757 nvmf_trace.0 00:18:12.757 14:58:45 -- common/autotest_common.sh@821 -- # return 0 00:18:12.757 14:58:45 -- fips/fips.sh@16 -- # killprocess 90153 00:18:12.757 14:58:45 -- common/autotest_common.sh@936 -- # '[' -z 90153 ']' 00:18:12.757 14:58:45 -- common/autotest_common.sh@940 -- # kill -0 90153 00:18:12.757 14:58:45 -- common/autotest_common.sh@941 -- # uname 00:18:12.757 14:58:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.757 14:58:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90153 00:18:12.758 14:58:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:12.758 14:58:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:12.758 
killing process with pid 90153 00:18:12.758 14:58:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90153' 00:18:12.758 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.758 00:18:12.758 Latency(us) 00:18:12.758 [2024-12-01T14:58:45.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.758 [2024-12-01T14:58:45.873Z] =================================================================================================================== 00:18:12.758 [2024-12-01T14:58:45.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.758 14:58:45 -- common/autotest_common.sh@955 -- # kill 90153 00:18:12.758 14:58:45 -- common/autotest_common.sh@960 -- # wait 90153 00:18:13.016 14:58:46 -- fips/fips.sh@17 -- # nvmftestfini 00:18:13.016 14:58:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:13.016 14:58:46 -- nvmf/common.sh@116 -- # sync 00:18:13.016 14:58:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:13.016 14:58:46 -- nvmf/common.sh@119 -- # set +e 00:18:13.016 14:58:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:13.016 14:58:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:13.016 rmmod nvme_tcp 00:18:13.016 rmmod nvme_fabrics 00:18:13.016 rmmod nvme_keyring 00:18:13.274 14:58:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:13.274 14:58:46 -- nvmf/common.sh@123 -- # set -e 00:18:13.274 14:58:46 -- nvmf/common.sh@124 -- # return 0 00:18:13.274 14:58:46 -- nvmf/common.sh@477 -- # '[' -n 90100 ']' 00:18:13.274 14:58:46 -- nvmf/common.sh@478 -- # killprocess 90100 00:18:13.274 14:58:46 -- common/autotest_common.sh@936 -- # '[' -z 90100 ']' 00:18:13.274 14:58:46 -- common/autotest_common.sh@940 -- # kill -0 90100 00:18:13.274 14:58:46 -- common/autotest_common.sh@941 -- # uname 00:18:13.274 14:58:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:13.274 14:58:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90100 00:18:13.274 14:58:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:13.274 killing process with pid 90100 00:18:13.274 14:58:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:13.274 14:58:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90100' 00:18:13.274 14:58:46 -- common/autotest_common.sh@955 -- # kill 90100 00:18:13.274 14:58:46 -- common/autotest_common.sh@960 -- # wait 90100 00:18:13.533 14:58:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:13.533 14:58:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:13.533 14:58:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:13.533 14:58:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.533 14:58:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:13.533 14:58:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.533 14:58:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.533 14:58:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.533 14:58:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:13.533 14:58:46 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:13.533 00:18:13.533 real 0m14.247s 00:18:13.533 user 0m18.036s 00:18:13.533 sys 0m6.616s 00:18:13.533 14:58:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:13.533 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:18:13.533 ************************************ 00:18:13.533 END TEST nvmf_fips 
00:18:13.533 ************************************ 00:18:13.533 14:58:46 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:13.533 14:58:46 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:13.533 14:58:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:13.533 14:58:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:13.533 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:18:13.533 ************************************ 00:18:13.533 START TEST nvmf_fuzz 00:18:13.533 ************************************ 00:18:13.533 14:58:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:13.533 * Looking for test storage... 00:18:13.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:13.533 14:58:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:13.533 14:58:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:13.534 14:58:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:13.793 14:58:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:13.793 14:58:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:13.793 14:58:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:13.793 14:58:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:13.793 14:58:46 -- scripts/common.sh@335 -- # IFS=.-: 00:18:13.793 14:58:46 -- scripts/common.sh@335 -- # read -ra ver1 00:18:13.793 14:58:46 -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.793 14:58:46 -- scripts/common.sh@336 -- # read -ra ver2 00:18:13.793 14:58:46 -- scripts/common.sh@337 -- # local 'op=<' 00:18:13.793 14:58:46 -- scripts/common.sh@339 -- # ver1_l=2 00:18:13.793 14:58:46 -- scripts/common.sh@340 -- # ver2_l=1 00:18:13.793 14:58:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:13.793 14:58:46 -- scripts/common.sh@343 -- # case "$op" in 00:18:13.793 14:58:46 -- scripts/common.sh@344 -- # : 1 00:18:13.793 14:58:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:13.793 14:58:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.793 14:58:46 -- scripts/common.sh@364 -- # decimal 1 00:18:13.793 14:58:46 -- scripts/common.sh@352 -- # local d=1 00:18:13.793 14:58:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.793 14:58:46 -- scripts/common.sh@354 -- # echo 1 00:18:13.793 14:58:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:13.793 14:58:46 -- scripts/common.sh@365 -- # decimal 2 00:18:13.793 14:58:46 -- scripts/common.sh@352 -- # local d=2 00:18:13.793 14:58:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.793 14:58:46 -- scripts/common.sh@354 -- # echo 2 00:18:13.793 14:58:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:13.793 14:58:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:13.793 14:58:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:13.793 14:58:46 -- scripts/common.sh@367 -- # return 0 00:18:13.793 14:58:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.793 14:58:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:13.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.793 --rc genhtml_branch_coverage=1 00:18:13.793 --rc genhtml_function_coverage=1 00:18:13.793 --rc genhtml_legend=1 00:18:13.793 --rc geninfo_all_blocks=1 00:18:13.793 --rc geninfo_unexecuted_blocks=1 00:18:13.793 00:18:13.793 ' 00:18:13.793 14:58:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:13.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.793 --rc genhtml_branch_coverage=1 00:18:13.793 --rc genhtml_function_coverage=1 00:18:13.793 --rc genhtml_legend=1 00:18:13.793 --rc geninfo_all_blocks=1 00:18:13.793 --rc geninfo_unexecuted_blocks=1 00:18:13.793 00:18:13.793 ' 00:18:13.793 14:58:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:13.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.793 --rc genhtml_branch_coverage=1 00:18:13.793 --rc genhtml_function_coverage=1 00:18:13.793 --rc genhtml_legend=1 00:18:13.793 --rc geninfo_all_blocks=1 00:18:13.793 --rc geninfo_unexecuted_blocks=1 00:18:13.793 00:18:13.793 ' 00:18:13.793 14:58:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:13.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.794 --rc genhtml_branch_coverage=1 00:18:13.794 --rc genhtml_function_coverage=1 00:18:13.794 --rc genhtml_legend=1 00:18:13.794 --rc geninfo_all_blocks=1 00:18:13.794 --rc geninfo_unexecuted_blocks=1 00:18:13.794 00:18:13.794 ' 00:18:13.794 14:58:46 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:13.794 14:58:46 -- nvmf/common.sh@7 -- # uname -s 00:18:13.794 14:58:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.794 14:58:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.794 14:58:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.794 14:58:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.794 14:58:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.794 14:58:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.794 14:58:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.794 14:58:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.794 14:58:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.794 14:58:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.794 14:58:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 
00:18:13.794 14:58:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:18:13.794 14:58:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.794 14:58:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.794 14:58:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.794 14:58:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.794 14:58:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.794 14:58:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.794 14:58:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.794 14:58:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.794 14:58:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.794 14:58:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.794 14:58:46 -- paths/export.sh@5 -- # export PATH 00:18:13.794 14:58:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.794 14:58:46 -- nvmf/common.sh@46 -- # : 0 00:18:13.794 14:58:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:13.794 14:58:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:13.794 14:58:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:13.794 14:58:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.794 14:58:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.794 14:58:46 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:13.794 14:58:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:13.794 14:58:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:13.794 14:58:46 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:13.794 14:58:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:13.794 14:58:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.794 14:58:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:13.794 14:58:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:13.794 14:58:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:13.794 14:58:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.794 14:58:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.794 14:58:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.794 14:58:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:13.794 14:58:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:13.794 14:58:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:13.794 14:58:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:13.794 14:58:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:13.794 14:58:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:13.794 14:58:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.794 14:58:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.794 14:58:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:13.794 14:58:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:13.794 14:58:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.794 14:58:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.794 14:58:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.794 14:58:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.794 14:58:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.794 14:58:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.794 14:58:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.794 14:58:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.794 14:58:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:13.794 14:58:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:13.794 Cannot find device "nvmf_tgt_br" 00:18:13.794 14:58:46 -- nvmf/common.sh@154 -- # true 00:18:13.794 14:58:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.794 Cannot find device "nvmf_tgt_br2" 00:18:13.794 14:58:46 -- nvmf/common.sh@155 -- # true 00:18:13.794 14:58:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:13.794 14:58:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:13.794 Cannot find device "nvmf_tgt_br" 00:18:13.794 14:58:46 -- nvmf/common.sh@157 -- # true 00:18:13.794 14:58:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:13.794 Cannot find device "nvmf_tgt_br2" 00:18:13.794 14:58:46 -- nvmf/common.sh@158 -- # true 00:18:13.794 14:58:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:13.794 14:58:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:13.794 14:58:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.794 14:58:46 -- nvmf/common.sh@161 -- # true 00:18:13.794 14:58:46 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.053 14:58:46 -- nvmf/common.sh@162 -- # true 00:18:14.053 14:58:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.053 14:58:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.053 14:58:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.053 14:58:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.053 14:58:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.053 14:58:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.053 14:58:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.053 14:58:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:14.053 14:58:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:14.053 14:58:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:14.053 14:58:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:14.053 14:58:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:14.053 14:58:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:14.053 14:58:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.053 14:58:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.053 14:58:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.053 14:58:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:14.053 14:58:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:14.053 14:58:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.053 14:58:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.053 14:58:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.053 14:58:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.053 14:58:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.053 14:58:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:14.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:18:14.053 00:18:14.053 --- 10.0.0.2 ping statistics --- 00:18:14.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.053 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:18:14.053 14:58:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:14.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:18:14.054 00:18:14.054 --- 10.0.0.3 ping statistics --- 00:18:14.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.054 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:14.054 14:58:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:14.054 00:18:14.054 --- 10.0.0.1 ping statistics --- 00:18:14.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.054 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:14.054 14:58:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.054 14:58:47 -- nvmf/common.sh@421 -- # return 0 00:18:14.054 14:58:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:14.054 14:58:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.054 14:58:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:14.054 14:58:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:14.054 14:58:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.054 14:58:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:14.054 14:58:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:14.054 14:58:47 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90505 00:18:14.054 14:58:47 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:14.054 14:58:47 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90505 00:18:14.054 14:58:47 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:14.054 14:58:47 -- common/autotest_common.sh@829 -- # '[' -z 90505 ']' 00:18:14.054 14:58:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.054 14:58:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.054 14:58:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:14.054 14:58:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.054 14:58:47 -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 14:58:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.431 14:58:48 -- common/autotest_common.sh@862 -- # return 0 00:18:15.431 14:58:48 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.431 14:58:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.431 14:58:48 -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 14:58:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.431 14:58:48 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:15.431 14:58:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.431 14:58:48 -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 Malloc0 00:18:15.431 14:58:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.431 14:58:48 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:15.431 14:58:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.431 14:58:48 -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 14:58:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.431 14:58:48 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.431 14:58:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.431 14:58:48 -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 14:58:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.431 14:58:48 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.431 14:58:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.431 14:58:48 -- common/autotest_common.sh@10 -- # set +x 00:18:15.431 14:58:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.431 14:58:48 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:15.431 14:58:48 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:15.690 Shutting down the fuzz application 00:18:15.690 14:58:48 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:15.949 Shutting down the fuzz application 00:18:15.949 14:58:48 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.949 14:58:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.949 14:58:48 -- common/autotest_common.sh@10 -- # set +x 00:18:15.949 14:58:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.949 14:58:48 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:15.949 14:58:48 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:15.949 14:58:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:15.949 14:58:48 -- nvmf/common.sh@116 -- # sync 00:18:15.949 14:58:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:15.949 14:58:49 -- nvmf/common.sh@119 -- # set +e 00:18:15.949 14:58:49 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:15.949 14:58:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:15.949 rmmod nvme_tcp 00:18:15.949 rmmod nvme_fabrics 00:18:15.949 rmmod nvme_keyring 00:18:15.949 14:58:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:15.949 14:58:49 -- nvmf/common.sh@123 -- # set -e 00:18:15.949 14:58:49 -- nvmf/common.sh@124 -- # return 0 00:18:15.949 14:58:49 -- nvmf/common.sh@477 -- # '[' -n 90505 ']' 00:18:15.949 14:58:49 -- nvmf/common.sh@478 -- # killprocess 90505 00:18:15.949 14:58:49 -- common/autotest_common.sh@936 -- # '[' -z 90505 ']' 00:18:15.949 14:58:49 -- common/autotest_common.sh@940 -- # kill -0 90505 00:18:15.949 14:58:49 -- common/autotest_common.sh@941 -- # uname 00:18:15.949 14:58:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.949 14:58:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90505 00:18:16.208 killing process with pid 90505 00:18:16.208 14:58:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:16.208 14:58:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:16.208 14:58:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90505' 00:18:16.208 14:58:49 -- common/autotest_common.sh@955 -- # kill 90505 00:18:16.208 14:58:49 -- common/autotest_common.sh@960 -- # wait 90505 00:18:16.208 14:58:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:16.208 14:58:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:16.208 14:58:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:16.208 14:58:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.208 14:58:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:16.208 14:58:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.208 14:58:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.208 14:58:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.467 14:58:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:16.467 14:58:49 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:16.467 00:18:16.467 real 0m2.796s 00:18:16.467 user 0m2.800s 00:18:16.467 sys 0m0.770s 00:18:16.467 14:58:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:16.467 ************************************ 00:18:16.467 END TEST nvmf_fuzz 00:18:16.467 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:18:16.467 ************************************ 00:18:16.467 14:58:49 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:16.467 14:58:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:16.467 14:58:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:16.467 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:18:16.467 ************************************ 00:18:16.467 START TEST nvmf_multiconnection 00:18:16.467 ************************************ 00:18:16.467 14:58:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:16.467 * Looking for test storage... 
00:18:16.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:16.467 14:58:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:16.467 14:58:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:16.467 14:58:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:16.467 14:58:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:16.467 14:58:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:16.467 14:58:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:16.467 14:58:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:16.467 14:58:49 -- scripts/common.sh@335 -- # IFS=.-: 00:18:16.467 14:58:49 -- scripts/common.sh@335 -- # read -ra ver1 00:18:16.467 14:58:49 -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.467 14:58:49 -- scripts/common.sh@336 -- # read -ra ver2 00:18:16.467 14:58:49 -- scripts/common.sh@337 -- # local 'op=<' 00:18:16.467 14:58:49 -- scripts/common.sh@339 -- # ver1_l=2 00:18:16.467 14:58:49 -- scripts/common.sh@340 -- # ver2_l=1 00:18:16.467 14:58:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:16.467 14:58:49 -- scripts/common.sh@343 -- # case "$op" in 00:18:16.467 14:58:49 -- scripts/common.sh@344 -- # : 1 00:18:16.467 14:58:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:16.467 14:58:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:16.467 14:58:49 -- scripts/common.sh@364 -- # decimal 1 00:18:16.467 14:58:49 -- scripts/common.sh@352 -- # local d=1 00:18:16.467 14:58:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.467 14:58:49 -- scripts/common.sh@354 -- # echo 1 00:18:16.467 14:58:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:16.467 14:58:49 -- scripts/common.sh@365 -- # decimal 2 00:18:16.467 14:58:49 -- scripts/common.sh@352 -- # local d=2 00:18:16.467 14:58:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.467 14:58:49 -- scripts/common.sh@354 -- # echo 2 00:18:16.467 14:58:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:16.467 14:58:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:16.467 14:58:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:16.467 14:58:49 -- scripts/common.sh@367 -- # return 0 00:18:16.467 14:58:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.467 14:58:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:16.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.467 --rc genhtml_branch_coverage=1 00:18:16.467 --rc genhtml_function_coverage=1 00:18:16.467 --rc genhtml_legend=1 00:18:16.467 --rc geninfo_all_blocks=1 00:18:16.467 --rc geninfo_unexecuted_blocks=1 00:18:16.467 00:18:16.467 ' 00:18:16.467 14:58:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:16.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.467 --rc genhtml_branch_coverage=1 00:18:16.467 --rc genhtml_function_coverage=1 00:18:16.467 --rc genhtml_legend=1 00:18:16.467 --rc geninfo_all_blocks=1 00:18:16.467 --rc geninfo_unexecuted_blocks=1 00:18:16.467 00:18:16.467 ' 00:18:16.467 14:58:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:16.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.467 --rc genhtml_branch_coverage=1 00:18:16.467 --rc genhtml_function_coverage=1 00:18:16.467 --rc genhtml_legend=1 00:18:16.467 --rc geninfo_all_blocks=1 00:18:16.467 --rc geninfo_unexecuted_blocks=1 00:18:16.467 00:18:16.467 ' 00:18:16.467 
14:58:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:16.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.467 --rc genhtml_branch_coverage=1 00:18:16.467 --rc genhtml_function_coverage=1 00:18:16.467 --rc genhtml_legend=1 00:18:16.467 --rc geninfo_all_blocks=1 00:18:16.467 --rc geninfo_unexecuted_blocks=1 00:18:16.467 00:18:16.467 ' 00:18:16.467 14:58:49 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.467 14:58:49 -- nvmf/common.sh@7 -- # uname -s 00:18:16.467 14:58:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.467 14:58:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.467 14:58:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.467 14:58:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.467 14:58:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.467 14:58:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.467 14:58:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.467 14:58:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.467 14:58:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.726 14:58:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.726 14:58:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:18:16.726 14:58:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:18:16.726 14:58:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.726 14:58:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.726 14:58:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.726 14:58:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.726 14:58:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.726 14:58:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.726 14:58:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.726 14:58:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.726 14:58:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.726 14:58:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.726 14:58:49 -- paths/export.sh@5 -- # export PATH 00:18:16.726 14:58:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.726 14:58:49 -- nvmf/common.sh@46 -- # : 0 00:18:16.726 14:58:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:16.726 14:58:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:16.726 14:58:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:16.726 14:58:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.726 14:58:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.726 14:58:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:16.726 14:58:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:16.726 14:58:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:16.726 14:58:49 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:16.726 14:58:49 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.726 14:58:49 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:16.726 14:58:49 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:16.726 14:58:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:16.726 14:58:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.726 14:58:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:16.726 14:58:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:16.726 14:58:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:16.726 14:58:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.726 14:58:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.726 14:58:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.726 14:58:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:16.726 14:58:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:16.726 14:58:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:16.726 14:58:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:16.726 14:58:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:16.726 14:58:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:16.726 14:58:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.726 14:58:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.726 14:58:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:16.726 14:58:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:16.726 14:58:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.726 14:58:49 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.726 14:58:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.726 14:58:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.726 14:58:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.726 14:58:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.726 14:58:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.726 14:58:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.726 14:58:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:16.726 14:58:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:16.726 Cannot find device "nvmf_tgt_br" 00:18:16.726 14:58:49 -- nvmf/common.sh@154 -- # true 00:18:16.726 14:58:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.726 Cannot find device "nvmf_tgt_br2" 00:18:16.726 14:58:49 -- nvmf/common.sh@155 -- # true 00:18:16.727 14:58:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:16.727 14:58:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:16.727 Cannot find device "nvmf_tgt_br" 00:18:16.727 14:58:49 -- nvmf/common.sh@157 -- # true 00:18:16.727 14:58:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:16.727 Cannot find device "nvmf_tgt_br2" 00:18:16.727 14:58:49 -- nvmf/common.sh@158 -- # true 00:18:16.727 14:58:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:16.727 14:58:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:16.727 14:58:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.727 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.727 14:58:49 -- nvmf/common.sh@161 -- # true 00:18:16.727 14:58:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.727 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.727 14:58:49 -- nvmf/common.sh@162 -- # true 00:18:16.727 14:58:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.727 14:58:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.727 14:58:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.727 14:58:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.727 14:58:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:16.727 14:58:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.727 14:58:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.727 14:58:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:16.727 14:58:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:16.727 14:58:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:16.727 14:58:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:16.727 14:58:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:16.727 14:58:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:16.727 14:58:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.727 14:58:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:16.985 14:58:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.985 14:58:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:16.985 14:58:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:16.985 14:58:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.985 14:58:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.985 14:58:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.985 14:58:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.985 14:58:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.985 14:58:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:16.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:18:16.985 00:18:16.985 --- 10.0.0.2 ping statistics --- 00:18:16.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.985 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:16.985 14:58:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:16.985 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.985 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:16.985 00:18:16.985 --- 10.0.0.3 ping statistics --- 00:18:16.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.985 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:16.985 14:58:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:16.985 00:18:16.985 --- 10.0.0.1 ping statistics --- 00:18:16.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.985 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:16.985 14:58:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.985 14:58:49 -- nvmf/common.sh@421 -- # return 0 00:18:16.985 14:58:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:16.985 14:58:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.985 14:58:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.985 14:58:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.985 14:58:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.985 14:58:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.985 14:58:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.985 14:58:49 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:16.985 14:58:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:16.985 14:58:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:16.985 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:18:16.985 14:58:49 -- nvmf/common.sh@469 -- # nvmfpid=90723 00:18:16.986 14:58:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:16.986 14:58:49 -- nvmf/common.sh@470 -- # waitforlisten 90723 00:18:16.986 14:58:49 -- common/autotest_common.sh@829 -- # '[' -z 90723 ']' 00:18:16.986 14:58:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
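As logged above, the multiconnection test starts its own nvmf_tgt inside the namespace before any RPCs are issued. A minimal sketch of that step (the socket-polling loop is a simplification of the harness's waitforlisten helper, not its exact code):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: reactors on cores 0-3
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # wait for the UNIX-domain RPC socket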
00:18:16.986 14:58:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.986 14:58:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.986 14:58:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.986 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:18:16.986 [2024-12-01 14:58:50.021164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:16.986 [2024-12-01 14:58:50.021328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.245 [2024-12-01 14:58:50.159822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.245 [2024-12-01 14:58:50.216033] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:17.245 [2024-12-01 14:58:50.216183] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.245 [2024-12-01 14:58:50.216195] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.245 [2024-12-01 14:58:50.216202] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.245 [2024-12-01 14:58:50.216351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.245 [2024-12-01 14:58:50.216495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.245 [2024-12-01 14:58:50.217127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.245 [2024-12-01 14:58:50.217173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.180 14:58:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.180 14:58:51 -- common/autotest_common.sh@862 -- # return 0 00:18:18.180 14:58:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:18.180 14:58:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 14:58:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.180 14:58:51 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 [2024-12-01 14:58:51.100598] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:18.180 14:58:51 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.180 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 Malloc1 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 14:58:51 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 [2024-12-01 14:58:51.178171] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.180 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 Malloc2 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.180 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 Malloc3 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 14:58:51 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.180 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:18.180 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 Malloc4 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.440 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 Malloc5 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.440 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.440 Malloc6 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.440 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 Malloc7 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.440 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 Malloc8 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 
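The rpc_cmd calls traced here are the harness's wrapper around SPDK's scripts/rpc.py; assuming that mapping, the per-subsystem setup that multiconnection.sh repeats for cnode1 through cnode11 condenses to roughly:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # one TCP transport, flags as logged
for i in $(seq 1 11); do
  scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done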
00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.440 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.440 Malloc9 00:18:18.440 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.440 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:18.440 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.440 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:18.700 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.700 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:18.700 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.700 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.700 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:18.700 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.700 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 Malloc10 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:18.700 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.700 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:18.700 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.700 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:18.700 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.700 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.700 14:58:51 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:18.700 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.700 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 Malloc11 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:18.700 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.700 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:18.700 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.700 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:18.700 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.700 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.700 14:58:51 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:18.700 14:58:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.700 14:58:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.959 14:58:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:18.959 14:58:51 -- common/autotest_common.sh@1187 -- # local i=0 00:18:18.959 14:58:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.959 14:58:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:18.959 14:58:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:20.864 14:58:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:20.864 14:58:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:20.864 14:58:53 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:20.864 14:58:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:20.864 14:58:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.864 14:58:53 -- common/autotest_common.sh@1197 -- # return 0 00:18:20.864 14:58:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.864 14:58:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:21.123 14:58:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:21.123 14:58:54 -- common/autotest_common.sh@1187 -- # local i=0 00:18:21.123 14:58:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.123 14:58:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:21.123 14:58:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:23.038 14:58:56 -- common/autotest_common.sh@1195 -- # (( 
i++ <= 15 )) 00:18:23.038 14:58:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:23.038 14:58:56 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:23.038 14:58:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:23.038 14:58:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.038 14:58:56 -- common/autotest_common.sh@1197 -- # return 0 00:18:23.038 14:58:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:23.038 14:58:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:23.296 14:58:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:23.296 14:58:56 -- common/autotest_common.sh@1187 -- # local i=0 00:18:23.296 14:58:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.296 14:58:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:23.296 14:58:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:25.200 14:58:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:25.200 14:58:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:25.200 14:58:58 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:25.200 14:58:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:25.200 14:58:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.200 14:58:58 -- common/autotest_common.sh@1197 -- # return 0 00:18:25.200 14:58:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:25.200 14:58:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:25.460 14:58:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:25.460 14:58:58 -- common/autotest_common.sh@1187 -- # local i=0 00:18:25.460 14:58:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.460 14:58:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:25.460 14:58:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:27.365 14:59:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:27.365 14:59:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:27.365 14:59:00 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:27.365 14:59:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:27.365 14:59:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.365 14:59:00 -- common/autotest_common.sh@1197 -- # return 0 00:18:27.625 14:59:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:27.625 14:59:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:27.625 14:59:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:27.625 14:59:00 -- common/autotest_common.sh@1187 -- # local i=0 00:18:27.625 14:59:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.625 14:59:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 
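Each connect step above pairs an nvme connect with the waitforserial helper, which polls lsblk until the new namespace's serial appears. Condensed from this trace (host NQN/ID as generated earlier in the run):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b
hostid=2d843004-a791-47f3-8dd7-3d04462c368b
for i in $(seq 1 11); do
  nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
       -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
  # waitforserial SPDK$i: retry up to 15 times, 2 s apart, until lsblk reports the serial
  tries=0
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ] || [ "$((tries++))" -ge 15 ]; do
    sleep 2
  done
done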
00:18:27.625 14:59:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:30.160 14:59:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:30.160 14:59:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:30.160 14:59:02 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:30.160 14:59:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:30.160 14:59:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.160 14:59:02 -- common/autotest_common.sh@1197 -- # return 0 00:18:30.160 14:59:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:30.160 14:59:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:30.160 14:59:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:30.160 14:59:02 -- common/autotest_common.sh@1187 -- # local i=0 00:18:30.160 14:59:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.160 14:59:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:30.160 14:59:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:32.117 14:59:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:32.118 14:59:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:32.118 14:59:04 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:32.118 14:59:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:32.118 14:59:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.118 14:59:04 -- common/autotest_common.sh@1197 -- # return 0 00:18:32.118 14:59:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.118 14:59:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:32.118 14:59:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:32.118 14:59:05 -- common/autotest_common.sh@1187 -- # local i=0 00:18:32.118 14:59:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:32.118 14:59:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:32.118 14:59:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:34.022 14:59:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:34.022 14:59:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:34.022 14:59:07 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:34.022 14:59:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:34.022 14:59:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.022 14:59:07 -- common/autotest_common.sh@1197 -- # return 0 00:18:34.022 14:59:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.022 14:59:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:34.281 14:59:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:34.281 14:59:07 -- common/autotest_common.sh@1187 -- # local i=0 00:18:34.281 14:59:07 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.281 14:59:07 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:34.281 14:59:07 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:36.184 14:59:09 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:36.184 14:59:09 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:36.184 14:59:09 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:36.184 14:59:09 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:36.184 14:59:09 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.184 14:59:09 -- common/autotest_common.sh@1197 -- # return 0 00:18:36.184 14:59:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.184 14:59:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:36.443 14:59:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:36.443 14:59:09 -- common/autotest_common.sh@1187 -- # local i=0 00:18:36.443 14:59:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:36.443 14:59:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:36.443 14:59:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:38.975 14:59:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:38.975 14:59:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:38.975 14:59:11 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:38.975 14:59:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:38.975 14:59:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.975 14:59:11 -- common/autotest_common.sh@1197 -- # return 0 00:18:38.975 14:59:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:38.975 14:59:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:38.975 14:59:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:38.975 14:59:11 -- common/autotest_common.sh@1187 -- # local i=0 00:18:38.975 14:59:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.975 14:59:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:38.975 14:59:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:40.878 14:59:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:40.878 14:59:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:40.878 14:59:13 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:40.878 14:59:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:40.878 14:59:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.878 14:59:13 -- common/autotest_common.sh@1197 -- # return 0 00:18:40.878 14:59:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.878 14:59:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:40.878 14:59:13 -- 
target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:40.878 14:59:13 -- common/autotest_common.sh@1187 -- # local i=0 00:18:40.878 14:59:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.878 14:59:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:40.878 14:59:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:42.781 14:59:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:43.040 14:59:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:43.040 14:59:15 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:43.040 14:59:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:43.040 14:59:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.040 14:59:15 -- common/autotest_common.sh@1197 -- # return 0 00:18:43.040 14:59:15 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:43.040 [global] 00:18:43.040 thread=1 00:18:43.040 invalidate=1 00:18:43.040 rw=read 00:18:43.040 time_based=1 00:18:43.040 runtime=10 00:18:43.040 ioengine=libaio 00:18:43.040 direct=1 00:18:43.040 bs=262144 00:18:43.040 iodepth=64 00:18:43.040 norandommap=1 00:18:43.040 numjobs=1 00:18:43.040 00:18:43.040 [job0] 00:18:43.040 filename=/dev/nvme0n1 00:18:43.040 [job1] 00:18:43.040 filename=/dev/nvme10n1 00:18:43.040 [job2] 00:18:43.040 filename=/dev/nvme1n1 00:18:43.040 [job3] 00:18:43.040 filename=/dev/nvme2n1 00:18:43.040 [job4] 00:18:43.040 filename=/dev/nvme3n1 00:18:43.040 [job5] 00:18:43.040 filename=/dev/nvme4n1 00:18:43.040 [job6] 00:18:43.040 filename=/dev/nvme5n1 00:18:43.040 [job7] 00:18:43.040 filename=/dev/nvme6n1 00:18:43.040 [job8] 00:18:43.040 filename=/dev/nvme7n1 00:18:43.040 [job9] 00:18:43.040 filename=/dev/nvme8n1 00:18:43.040 [job10] 00:18:43.040 filename=/dev/nvme9n1 00:18:43.040 Could not set queue depth (nvme0n1) 00:18:43.040 Could not set queue depth (nvme10n1) 00:18:43.040 Could not set queue depth (nvme1n1) 00:18:43.040 Could not set queue depth (nvme2n1) 00:18:43.040 Could not set queue depth (nvme3n1) 00:18:43.040 Could not set queue depth (nvme4n1) 00:18:43.040 Could not set queue depth (nvme5n1) 00:18:43.040 Could not set queue depth (nvme6n1) 00:18:43.040 Could not set queue depth (nvme7n1) 00:18:43.040 Could not set queue depth (nvme8n1) 00:18:43.040 Could not set queue depth (nvme9n1) 00:18:43.298 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:43.298 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:43.298 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:43.298 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:43.298 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:43.298 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:43.298 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:43.298 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:43.298 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:43.298 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:43.298 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:43.298 fio-3.35 00:18:43.298 Starting 11 threads 00:18:55.506 00:18:55.506 job0: (groupid=0, jobs=1): err= 0: pid=91206: Sun Dec 1 14:59:26 2024 00:18:55.506 read: IOPS=256, BW=64.2MiB/s (67.3MB/s)(654MiB/10182msec) 00:18:55.506 slat (usec): min=22, max=113551, avg=3814.74, stdev=11723.47 00:18:55.507 clat (msec): min=85, max=433, avg=244.67, stdev=28.64 00:18:55.507 lat (msec): min=86, max=433, avg=248.49, stdev=30.68 00:18:55.507 clat percentiles (msec): 00:18:55.507 | 1.00th=[ 171], 5.00th=[ 201], 10.00th=[ 215], 20.00th=[ 228], 00:18:55.507 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:18:55.507 | 70.00th=[ 257], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:18:55.507 | 99.00th=[ 300], 99.50th=[ 359], 99.90th=[ 435], 99.95th=[ 435], 00:18:55.507 | 99.99th=[ 435] 00:18:55.507 bw ( KiB/s): min=58997, max=71680, per=4.73%, avg=65317.60, stdev=3532.54, samples=20 00:18:55.507 iops : min= 230, max= 280, avg=255.05, stdev=13.87, samples=20 00:18:55.507 lat (msec) : 100=0.15%, 250=56.67%, 500=43.17% 00:18:55.507 cpu : usr=0.13%, sys=1.02%, ctx=616, majf=0, minf=4097 00:18:55.507 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:18:55.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.507 issued rwts: total=2615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.507 job1: (groupid=0, jobs=1): err= 0: pid=91207: Sun Dec 1 14:59:26 2024 00:18:55.507 read: IOPS=375, BW=94.0MiB/s (98.5MB/s)(958MiB/10188msec) 00:18:55.507 slat (usec): min=15, max=164477, avg=2499.80, stdev=11336.71 00:18:55.507 clat (usec): min=1793, max=442364, avg=167215.69, stdev=111213.64 00:18:55.507 lat (usec): min=1826, max=442400, avg=169715.49, stdev=113377.22 00:18:55.507 clat percentiles (msec): 00:18:55.507 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 16], 20.00th=[ 34], 00:18:55.507 | 30.00th=[ 45], 40.00th=[ 199], 50.00th=[ 226], 60.00th=[ 243], 00:18:55.507 | 70.00th=[ 253], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 292], 00:18:55.507 | 99.00th=[ 330], 99.50th=[ 397], 99.90th=[ 430], 99.95th=[ 430], 00:18:55.507 | 99.99th=[ 443] 00:18:55.507 bw ( KiB/s): min=48640, max=299944, per=6.97%, avg=96319.35, stdev=81770.85, samples=20 00:18:55.507 iops : min= 190, max= 1171, avg=376.00, stdev=319.27, samples=20 00:18:55.507 lat (msec) : 2=0.03%, 4=0.78%, 10=3.92%, 20=8.56%, 50=18.36% 00:18:55.507 lat (msec) : 100=7.05%, 250=28.41%, 500=32.90% 00:18:55.507 cpu : usr=0.11%, sys=1.40%, ctx=827, majf=0, minf=4097 00:18:55.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:55.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.507 issued rwts: total=3830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.507 job2: (groupid=0, jobs=1): err= 0: pid=91208: Sun Dec 1 14:59:26 2024 00:18:55.507 read: IOPS=305, BW=76.3MiB/s (80.0MB/s)(777MiB/10188msec) 00:18:55.507 slat (usec): min=16, max=183600, avg=3120.36, stdev=12800.39 
00:18:55.507 clat (msec): min=2, max=447, avg=206.23, stdev=86.88 00:18:55.507 lat (msec): min=2, max=447, avg=209.35, stdev=88.76 00:18:55.507 clat percentiles (msec): 00:18:55.507 | 1.00th=[ 11], 5.00th=[ 44], 10.00th=[ 61], 20.00th=[ 78], 00:18:55.507 | 30.00th=[ 211], 40.00th=[ 230], 50.00th=[ 243], 60.00th=[ 249], 00:18:55.507 | 70.00th=[ 255], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 292], 00:18:55.507 | 99.00th=[ 380], 99.50th=[ 414], 99.90th=[ 447], 99.95th=[ 447], 00:18:55.507 | 99.99th=[ 447] 00:18:55.507 bw ( KiB/s): min=55185, max=208896, per=5.63%, avg=77894.75, stdev=42592.64, samples=20 00:18:55.507 iops : min= 215, max= 816, avg=304.05, stdev=166.31, samples=20 00:18:55.507 lat (msec) : 4=0.03%, 10=0.84%, 20=2.28%, 50=2.25%, 100=16.02% 00:18:55.507 lat (msec) : 250=39.02%, 500=39.56% 00:18:55.507 cpu : usr=0.14%, sys=1.14%, ctx=627, majf=0, minf=4098 00:18:55.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:55.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.507 issued rwts: total=3109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.507 job3: (groupid=0, jobs=1): err= 0: pid=91209: Sun Dec 1 14:59:26 2024 00:18:55.507 read: IOPS=363, BW=90.9MiB/s (95.3MB/s)(924MiB/10172msec) 00:18:55.507 slat (usec): min=17, max=221841, avg=2561.23, stdev=11527.09 00:18:55.507 clat (msec): min=2, max=447, avg=173.07, stdev=98.63 00:18:55.507 lat (msec): min=2, max=447, avg=175.63, stdev=100.55 00:18:55.507 clat percentiles (msec): 00:18:55.507 | 1.00th=[ 13], 5.00th=[ 25], 10.00th=[ 30], 20.00th=[ 68], 00:18:55.507 | 30.00th=[ 91], 40.00th=[ 108], 50.00th=[ 224], 60.00th=[ 241], 00:18:55.507 | 70.00th=[ 251], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 284], 00:18:55.507 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 447], 99.95th=[ 447], 00:18:55.507 | 99.99th=[ 447] 00:18:55.507 bw ( KiB/s): min=57229, max=292864, per=6.73%, avg=93042.45, stdev=64956.51, samples=20 00:18:55.507 iops : min= 223, max= 1144, avg=363.30, stdev=253.71, samples=20 00:18:55.507 lat (msec) : 4=0.11%, 10=0.76%, 20=1.33%, 50=15.93%, 100=18.66% 00:18:55.507 lat (msec) : 250=32.27%, 500=30.94% 00:18:55.507 cpu : usr=0.17%, sys=1.33%, ctx=751, majf=0, minf=4097 00:18:55.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:55.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.507 issued rwts: total=3697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.507 job4: (groupid=0, jobs=1): err= 0: pid=91210: Sun Dec 1 14:59:26 2024 00:18:55.507 read: IOPS=1386, BW=347MiB/s (364MB/s)(3473MiB/10015msec) 00:18:55.507 slat (usec): min=20, max=77988, avg=707.54, stdev=3297.11 00:18:55.507 clat (msec): min=4, max=224, avg=45.35, stdev=27.22 00:18:55.507 lat (msec): min=4, max=224, avg=46.05, stdev=27.64 00:18:55.507 clat percentiles (msec): 00:18:55.507 | 1.00th=[ 18], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 26], 00:18:55.507 | 30.00th=[ 29], 40.00th=[ 32], 50.00th=[ 36], 60.00th=[ 40], 00:18:55.507 | 70.00th=[ 50], 80.00th=[ 66], 90.00th=[ 83], 95.00th=[ 95], 00:18:55.507 | 99.00th=[ 161], 99.50th=[ 182], 99.90th=[ 220], 99.95th=[ 222], 00:18:55.507 | 99.99th=[ 224] 00:18:55.507 bw ( KiB/s): 
min=92672, max=556032, per=25.60%, avg=353905.60, stdev=163812.97, samples=20 00:18:55.507 iops : min= 362, max= 2172, avg=1382.40, stdev=639.94, samples=20 00:18:55.507 lat (msec) : 10=0.07%, 20=2.84%, 50=67.60%, 100=25.54%, 250=3.95% 00:18:55.507 cpu : usr=0.41%, sys=3.95%, ctx=2889, majf=0, minf=4097 00:18:55.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:18:55.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.507 issued rwts: total=13890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.507 job5: (groupid=0, jobs=1): err= 0: pid=91211: Sun Dec 1 14:59:26 2024 00:18:55.507 read: IOPS=744, BW=186MiB/s (195MB/s)(1869MiB/10036msec) 00:18:55.507 slat (usec): min=18, max=72778, avg=1295.07, stdev=5275.77 00:18:55.507 clat (msec): min=8, max=284, avg=84.38, stdev=42.02 00:18:55.507 lat (msec): min=9, max=289, avg=85.68, stdev=42.64 00:18:55.507 clat percentiles (msec): 00:18:55.507 | 1.00th=[ 25], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 58], 00:18:55.507 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 77], 00:18:55.507 | 70.00th=[ 84], 80.00th=[ 103], 90.00th=[ 148], 95.00th=[ 180], 00:18:55.507 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 275], 99.95th=[ 284], 00:18:55.507 | 99.99th=[ 284] 00:18:55.507 bw ( KiB/s): min=65536, max=275968, per=13.72%, avg=189611.85, stdev=69905.89, samples=20 00:18:55.507 iops : min= 256, max= 1078, avg=740.50, stdev=273.10, samples=20 00:18:55.507 lat (msec) : 10=0.04%, 20=0.64%, 50=7.97%, 100=70.77%, 250=20.08% 00:18:55.507 lat (msec) : 500=0.49% 00:18:55.507 cpu : usr=0.26%, sys=2.53%, ctx=1202, majf=0, minf=4097 00:18:55.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:55.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.507 issued rwts: total=7476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.507 job6: (groupid=0, jobs=1): err= 0: pid=91212: Sun Dec 1 14:59:26 2024 00:18:55.507 read: IOPS=296, BW=74.2MiB/s (77.8MB/s)(756MiB/10187msec) 00:18:55.507 slat (usec): min=19, max=148873, avg=3245.57, stdev=13306.58 00:18:55.507 clat (msec): min=19, max=401, avg=211.82, stdev=77.94 00:18:55.507 lat (msec): min=22, max=426, avg=215.06, stdev=80.05 00:18:55.507 clat percentiles (msec): 00:18:55.507 | 1.00th=[ 34], 5.00th=[ 69], 10.00th=[ 87], 20.00th=[ 108], 00:18:55.507 | 30.00th=[ 218], 40.00th=[ 232], 50.00th=[ 243], 60.00th=[ 251], 00:18:55.507 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 292], 00:18:55.507 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 401], 99.95th=[ 401], 00:18:55.507 | 99.99th=[ 401] 00:18:55.507 bw ( KiB/s): min=39856, max=175426, per=5.48%, avg=75699.30, stdev=35264.64, samples=20 00:18:55.507 iops : min= 155, max= 685, avg=295.50, stdev=137.81, samples=20 00:18:55.507 lat (msec) : 20=0.03%, 50=3.11%, 100=13.23%, 250=44.08%, 500=39.55% 00:18:55.507 cpu : usr=0.12%, sys=1.20%, ctx=598, majf=0, minf=4097 00:18:55.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:18:55.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.507 issued rwts: 
total=3024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.507 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.507 job7: (groupid=0, jobs=1): err= 0: pid=91213: Sun Dec 1 14:59:26 2024 00:18:55.507 read: IOPS=256, BW=64.1MiB/s (67.2MB/s)(653MiB/10183msec) 00:18:55.507 slat (usec): min=16, max=125960, avg=3644.60, stdev=12063.13 00:18:55.507 clat (msec): min=13, max=454, avg=245.61, stdev=55.53 00:18:55.507 lat (msec): min=14, max=454, avg=249.26, stdev=57.23 00:18:55.507 clat percentiles (msec): 00:18:55.507 | 1.00th=[ 23], 5.00th=[ 150], 10.00th=[ 203], 20.00th=[ 228], 00:18:55.508 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:18:55.508 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 305], 00:18:55.508 | 99.00th=[ 338], 99.50th=[ 393], 99.90th=[ 456], 99.95th=[ 456], 00:18:55.508 | 99.99th=[ 456] 00:18:55.508 bw ( KiB/s): min=53652, max=116456, per=4.72%, avg=65179.80, stdev=13431.06, samples=20 00:18:55.508 iops : min= 209, max= 454, avg=254.35, stdev=52.38, samples=20 00:18:55.508 lat (msec) : 20=0.69%, 50=2.99%, 100=0.19%, 250=38.28%, 500=57.85% 00:18:55.508 cpu : usr=0.13%, sys=1.08%, ctx=638, majf=0, minf=4097 00:18:55.508 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:18:55.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.508 issued rwts: total=2610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.508 job8: (groupid=0, jobs=1): err= 0: pid=91214: Sun Dec 1 14:59:26 2024 00:18:55.508 read: IOPS=938, BW=235MiB/s (246MB/s)(2354MiB/10036msec) 00:18:55.508 slat (usec): min=17, max=117070, avg=1030.66, stdev=4568.28 00:18:55.508 clat (usec): min=408, max=254914, avg=67057.13, stdev=33881.10 00:18:55.508 lat (usec): min=1295, max=323393, avg=68087.80, stdev=34476.14 00:18:55.508 clat percentiles (msec): 00:18:55.508 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 46], 00:18:55.508 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:18:55.508 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 94], 95.00th=[ 125], 00:18:55.508 | 99.00th=[ 218], 99.50th=[ 228], 99.90th=[ 241], 99.95th=[ 245], 00:18:55.508 | 99.99th=[ 255] 00:18:55.508 bw ( KiB/s): min=76494, max=453237, per=17.31%, avg=239239.95, stdev=90579.30, samples=20 00:18:55.508 iops : min= 298, max= 1770, avg=934.40, stdev=353.79, samples=20 00:18:55.508 lat (usec) : 500=0.01% 00:18:55.508 lat (msec) : 2=0.01%, 4=0.79%, 10=1.14%, 20=0.32%, 50=21.97% 00:18:55.508 lat (msec) : 100=67.97%, 250=7.78%, 500=0.01% 00:18:55.508 cpu : usr=0.28%, sys=3.02%, ctx=1932, majf=0, minf=4097 00:18:55.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:55.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.508 issued rwts: total=9417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.508 job9: (groupid=0, jobs=1): err= 0: pid=91215: Sun Dec 1 14:59:26 2024 00:18:55.508 read: IOPS=262, BW=65.7MiB/s (68.9MB/s)(669MiB/10185msec) 00:18:55.508 slat (usec): min=20, max=163933, avg=3593.98, stdev=13479.69 00:18:55.508 clat (msec): min=19, max=447, avg=239.39, stdev=55.58 00:18:55.508 lat (msec): min=19, max=467, avg=242.99, stdev=57.80 00:18:55.508 clat percentiles 
(msec): 00:18:55.508 | 1.00th=[ 31], 5.00th=[ 108], 10.00th=[ 197], 20.00th=[ 220], 00:18:55.508 | 30.00th=[ 232], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 257], 00:18:55.508 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 300], 00:18:55.508 | 99.00th=[ 338], 99.50th=[ 372], 99.90th=[ 447], 99.95th=[ 447], 00:18:55.508 | 99.99th=[ 447] 00:18:55.508 bw ( KiB/s): min=49053, max=96768, per=4.84%, avg=66858.65, stdev=11190.27, samples=20 00:18:55.508 iops : min= 191, max= 378, avg=260.95, stdev=43.78, samples=20 00:18:55.508 lat (msec) : 20=0.22%, 50=2.13%, 100=1.83%, 250=43.56%, 500=52.26% 00:18:55.508 cpu : usr=0.06%, sys=1.06%, ctx=686, majf=0, minf=4097 00:18:55.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:18:55.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.508 issued rwts: total=2677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.508 job10: (groupid=0, jobs=1): err= 0: pid=91216: Sun Dec 1 14:59:26 2024 00:18:55.508 read: IOPS=262, BW=65.6MiB/s (68.8MB/s)(667MiB/10173msec) 00:18:55.508 slat (usec): min=17, max=105446, avg=3487.46, stdev=11379.31 00:18:55.508 clat (msec): min=4, max=399, avg=239.73, stdev=50.06 00:18:55.508 lat (msec): min=5, max=399, avg=243.22, stdev=51.83 00:18:55.508 clat percentiles (msec): 00:18:55.508 | 1.00th=[ 15], 5.00th=[ 174], 10.00th=[ 203], 20.00th=[ 222], 00:18:55.508 | 30.00th=[ 232], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:18:55.508 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 292], 00:18:55.508 | 99.00th=[ 321], 99.50th=[ 330], 99.90th=[ 401], 99.95th=[ 401], 00:18:55.508 | 99.99th=[ 401] 00:18:55.508 bw ( KiB/s): min=56832, max=94720, per=4.82%, avg=66670.00, stdev=9486.00, samples=20 00:18:55.508 iops : min= 222, max= 370, avg=260.30, stdev=37.08, samples=20 00:18:55.508 lat (msec) : 10=0.34%, 20=1.46%, 50=0.90%, 100=0.97%, 250=49.19% 00:18:55.508 lat (msec) : 500=47.13% 00:18:55.508 cpu : usr=0.15%, sys=1.10%, ctx=557, majf=0, minf=4097 00:18:55.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:18:55.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.508 issued rwts: total=2669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.508 00:18:55.508 Run status group 0 (all jobs): 00:18:55.508 READ: bw=1350MiB/s (1416MB/s), 64.1MiB/s-347MiB/s (67.2MB/s-364MB/s), io=13.4GiB (14.4GB), run=10015-10188msec 00:18:55.508 00:18:55.508 Disk stats (read/write): 00:18:55.508 nvme0n1: ios=5116/0, merge=0/0, ticks=1235657/0, in_queue=1235657, util=97.06% 00:18:55.508 nvme10n1: ios=7532/0, merge=0/0, ticks=1224896/0, in_queue=1224896, util=97.20% 00:18:55.508 nvme1n1: ios=6104/0, merge=0/0, ticks=1230324/0, in_queue=1230324, util=97.98% 00:18:55.508 nvme2n1: ios=7266/0, merge=0/0, ticks=1224436/0, in_queue=1224436, util=97.50% 00:18:55.508 nvme3n1: ios=27652/0, merge=0/0, ticks=1208281/0, in_queue=1208281, util=96.83% 00:18:55.508 nvme4n1: ios=14926/0, merge=0/0, ticks=1236973/0, in_queue=1236973, util=97.69% 00:18:55.508 nvme5n1: ios=5920/0, merge=0/0, ticks=1225831/0, in_queue=1225831, util=98.27% 00:18:55.508 nvme6n1: ios=5100/0, merge=0/0, ticks=1230531/0, in_queue=1230531, util=98.19% 
00:18:55.508 nvme7n1: ios=18743/0, merge=0/0, ticks=1231967/0, in_queue=1231967, util=98.08% 00:18:55.508 nvme8n1: ios=5228/0, merge=0/0, ticks=1229199/0, in_queue=1229199, util=98.57% 00:18:55.508 nvme9n1: ios=5210/0, merge=0/0, ticks=1234174/0, in_queue=1234174, util=98.56% 00:18:55.508 14:59:26 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:55.508 [global] 00:18:55.508 thread=1 00:18:55.508 invalidate=1 00:18:55.508 rw=randwrite 00:18:55.508 time_based=1 00:18:55.508 runtime=10 00:18:55.508 ioengine=libaio 00:18:55.508 direct=1 00:18:55.508 bs=262144 00:18:55.508 iodepth=64 00:18:55.508 norandommap=1 00:18:55.508 numjobs=1 00:18:55.508 00:18:55.508 [job0] 00:18:55.508 filename=/dev/nvme0n1 00:18:55.508 [job1] 00:18:55.508 filename=/dev/nvme10n1 00:18:55.508 [job2] 00:18:55.508 filename=/dev/nvme1n1 00:18:55.508 [job3] 00:18:55.508 filename=/dev/nvme2n1 00:18:55.508 [job4] 00:18:55.508 filename=/dev/nvme3n1 00:18:55.508 [job5] 00:18:55.508 filename=/dev/nvme4n1 00:18:55.508 [job6] 00:18:55.508 filename=/dev/nvme5n1 00:18:55.508 [job7] 00:18:55.508 filename=/dev/nvme6n1 00:18:55.508 [job8] 00:18:55.508 filename=/dev/nvme7n1 00:18:55.508 [job9] 00:18:55.508 filename=/dev/nvme8n1 00:18:55.508 [job10] 00:18:55.508 filename=/dev/nvme9n1 00:18:55.508 Could not set queue depth (nvme0n1) 00:18:55.508 Could not set queue depth (nvme10n1) 00:18:55.508 Could not set queue depth (nvme1n1) 00:18:55.508 Could not set queue depth (nvme2n1) 00:18:55.508 Could not set queue depth (nvme3n1) 00:18:55.508 Could not set queue depth (nvme4n1) 00:18:55.508 Could not set queue depth (nvme5n1) 00:18:55.508 Could not set queue depth (nvme6n1) 00:18:55.508 Could not set queue depth (nvme7n1) 00:18:55.508 Could not set queue depth (nvme8n1) 00:18:55.508 Could not set queue depth (nvme9n1) 00:18:55.508 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.508 fio-3.35 00:18:55.508 Starting 11 threads 00:19:05.490 00:19:05.490 job0: (groupid=0, jobs=1): err= 0: pid=91411: Sun Dec 1 14:59:37 2024 00:19:05.490 write: IOPS=583, BW=146MiB/s (153MB/s)(1470MiB/10082msec); 0 zone resets 00:19:05.490 slat (usec): 
min=19, max=38589, avg=1695.33, stdev=2922.16 00:19:05.490 clat (msec): min=41, max=185, avg=108.03, stdev=12.04 00:19:05.490 lat (msec): min=41, max=185, avg=109.72, stdev=11.88 00:19:05.490 clat percentiles (msec): 00:19:05.490 | 1.00th=[ 99], 5.00th=[ 100], 10.00th=[ 101], 20.00th=[ 103], 00:19:05.490 | 30.00th=[ 106], 40.00th=[ 106], 50.00th=[ 107], 60.00th=[ 107], 00:19:05.490 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 134], 00:19:05.490 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 184], 00:19:05.490 | 99.99th=[ 186] 00:19:05.490 bw ( KiB/s): min=96256, max=155648, per=11.03%, avg=148829.05, stdev=13382.67, samples=20 00:19:05.490 iops : min= 376, max= 608, avg=581.20, stdev=52.25, samples=20 00:19:05.490 lat (msec) : 50=0.07%, 100=9.97%, 250=89.96% 00:19:05.490 cpu : usr=1.15%, sys=1.80%, ctx=6823, majf=0, minf=1 00:19:05.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:05.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.490 issued rwts: total=0,5879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.490 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.490 job1: (groupid=0, jobs=1): err= 0: pid=91412: Sun Dec 1 14:59:37 2024 00:19:05.490 write: IOPS=695, BW=174MiB/s (182MB/s)(1754MiB/10085msec); 0 zone resets 00:19:05.490 slat (usec): min=27, max=13006, avg=1420.52, stdev=2388.57 00:19:05.490 clat (msec): min=3, max=173, avg=90.52, stdev= 7.12 00:19:05.490 lat (msec): min=3, max=173, avg=91.94, stdev= 6.84 00:19:05.490 clat percentiles (msec): 00:19:05.490 | 1.00th=[ 85], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 88], 00:19:05.490 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 92], 00:19:05.490 | 70.00th=[ 92], 80.00th=[ 93], 90.00th=[ 94], 95.00th=[ 94], 00:19:05.490 | 99.00th=[ 106], 99.50th=[ 122], 99.90th=[ 163], 99.95th=[ 167], 00:19:05.490 | 99.99th=[ 174] 00:19:05.490 bw ( KiB/s): min=172376, max=180736, per=13.19%, avg=177986.20, stdev=1815.92, samples=20 00:19:05.490 iops : min= 673, max= 706, avg=695.15, stdev= 7.16, samples=20 00:19:05.490 lat (msec) : 4=0.06%, 10=0.04%, 20=0.11%, 50=0.23%, 100=98.30% 00:19:05.490 lat (msec) : 250=1.25% 00:19:05.490 cpu : usr=1.80%, sys=1.89%, ctx=8203, majf=0, minf=2 00:19:05.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:05.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.490 issued rwts: total=0,7017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.490 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.490 job2: (groupid=0, jobs=1): err= 0: pid=91424: Sun Dec 1 14:59:37 2024 00:19:05.490 write: IOPS=365, BW=91.4MiB/s (95.8MB/s)(928MiB/10160msec); 0 zone resets 00:19:05.490 slat (usec): min=27, max=68803, avg=2690.38, stdev=4740.64 00:19:05.490 clat (msec): min=9, max=323, avg=172.33, stdev=19.74 00:19:05.491 lat (msec): min=9, max=323, avg=175.02, stdev=19.43 00:19:05.491 clat percentiles (msec): 00:19:05.491 | 1.00th=[ 77], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 165], 00:19:05.491 | 30.00th=[ 174], 40.00th=[ 174], 50.00th=[ 174], 60.00th=[ 176], 00:19:05.491 | 70.00th=[ 176], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 188], 00:19:05.491 | 99.00th=[ 226], 99.50th=[ 271], 99.90th=[ 313], 99.95th=[ 326], 00:19:05.491 | 99.99th=[ 326] 00:19:05.491 bw ( KiB/s): 
min=86528, max=96256, per=6.92%, avg=93421.35, stdev=2067.87, samples=20 00:19:05.491 iops : min= 338, max= 376, avg=364.90, stdev= 8.08, samples=20 00:19:05.491 lat (msec) : 10=0.13%, 20=0.22%, 50=0.32%, 100=0.65%, 250=97.98% 00:19:05.491 lat (msec) : 500=0.70% 00:19:05.491 cpu : usr=0.88%, sys=1.23%, ctx=4294, majf=0, minf=1 00:19:05.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:05.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.491 issued rwts: total=0,3713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.491 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.491 job3: (groupid=0, jobs=1): err= 0: pid=91425: Sun Dec 1 14:59:37 2024 00:19:05.491 write: IOPS=423, BW=106MiB/s (111MB/s)(1074MiB/10134msec); 0 zone resets 00:19:05.491 slat (usec): min=18, max=12892, avg=2323.74, stdev=3982.51 00:19:05.491 clat (msec): min=6, max=283, avg=148.60, stdev=18.76 00:19:05.491 lat (msec): min=7, max=283, avg=150.93, stdev=18.63 00:19:05.491 clat percentiles (msec): 00:19:05.491 | 1.00th=[ 86], 5.00th=[ 109], 10.00th=[ 142], 20.00th=[ 146], 00:19:05.491 | 30.00th=[ 148], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 155], 00:19:05.491 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 159], 00:19:05.491 | 99.00th=[ 178], 99.50th=[ 234], 99.90th=[ 275], 99.95th=[ 275], 00:19:05.491 | 99.99th=[ 284] 00:19:05.491 bw ( KiB/s): min=104239, max=146432, per=8.03%, avg=108307.45, stdev=9372.93, samples=20 00:19:05.491 iops : min= 407, max= 572, avg=423.05, stdev=36.62, samples=20 00:19:05.491 lat (msec) : 10=0.05%, 20=0.09%, 50=0.37%, 100=0.77%, 250=98.39% 00:19:05.491 lat (msec) : 500=0.33% 00:19:05.491 cpu : usr=0.89%, sys=1.37%, ctx=6412, majf=0, minf=1 00:19:05.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:05.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.491 issued rwts: total=0,4295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.491 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.491 job4: (groupid=0, jobs=1): err= 0: pid=91426: Sun Dec 1 14:59:37 2024 00:19:05.491 write: IOPS=366, BW=91.7MiB/s (96.1MB/s)(930MiB/10149msec); 0 zone resets 00:19:05.491 slat (usec): min=17, max=52979, avg=2682.48, stdev=4678.35 00:19:05.491 clat (msec): min=17, max=319, avg=171.78, stdev=15.32 00:19:05.491 lat (msec): min=17, max=319, avg=174.47, stdev=14.80 00:19:05.491 clat percentiles (msec): 00:19:05.491 | 1.00th=[ 140], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 165], 00:19:05.491 | 30.00th=[ 171], 40.00th=[ 174], 50.00th=[ 174], 60.00th=[ 174], 00:19:05.491 | 70.00th=[ 176], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 178], 00:19:05.491 | 99.00th=[ 224], 99.50th=[ 266], 99.90th=[ 309], 99.95th=[ 321], 00:19:05.491 | 99.99th=[ 321] 00:19:05.491 bw ( KiB/s): min=83968, max=98304, per=6.94%, avg=93616.70, stdev=2580.40, samples=20 00:19:05.491 iops : min= 328, max= 384, avg=365.65, stdev=10.09, samples=20 00:19:05.491 lat (msec) : 20=0.11%, 50=0.21%, 100=0.21%, 250=98.76%, 500=0.70% 00:19:05.491 cpu : usr=0.71%, sys=1.04%, ctx=4773, majf=0, minf=1 00:19:05.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:05.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.491 issued rwts: total=0,3721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.491 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.491 job5: (groupid=0, jobs=1): err= 0: pid=91427: Sun Dec 1 14:59:37 2024 00:19:05.491 write: IOPS=364, BW=91.2MiB/s (95.7MB/s)(926MiB/10150msec); 0 zone resets 00:19:05.491 slat (usec): min=21, max=36245, avg=2695.36, stdev=4651.41 00:19:05.491 clat (msec): min=17, max=314, avg=172.60, stdev=17.59 00:19:05.491 lat (msec): min=17, max=314, avg=175.30, stdev=17.24 00:19:05.491 clat percentiles (msec): 00:19:05.491 | 1.00th=[ 94], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 167], 00:19:05.491 | 30.00th=[ 174], 40.00th=[ 174], 50.00th=[ 174], 60.00th=[ 176], 00:19:05.491 | 70.00th=[ 176], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 186], 00:19:05.491 | 99.00th=[ 218], 99.50th=[ 262], 99.90th=[ 305], 99.95th=[ 317], 00:19:05.491 | 99.99th=[ 317] 00:19:05.491 bw ( KiB/s): min=88576, max=96256, per=6.91%, avg=93190.90, stdev=1753.17, samples=20 00:19:05.491 iops : min= 346, max= 376, avg=363.95, stdev= 6.89, samples=20 00:19:05.491 lat (msec) : 20=0.08%, 50=0.43%, 100=0.57%, 250=98.22%, 500=0.70% 00:19:05.491 cpu : usr=1.14%, sys=1.22%, ctx=5102, majf=0, minf=1 00:19:05.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:05.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.491 issued rwts: total=0,3704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.491 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.491 job6: (groupid=0, jobs=1): err= 0: pid=91428: Sun Dec 1 14:59:37 2024 00:19:05.491 write: IOPS=418, BW=105MiB/s (110MB/s)(1060MiB/10131msec); 0 zone resets 00:19:05.491 slat (usec): min=23, max=12424, avg=2322.99, stdev=4043.69 00:19:05.491 clat (msec): min=4, max=278, avg=150.49, stdev=18.35 00:19:05.491 lat (msec): min=4, max=278, avg=152.81, stdev=18.26 00:19:05.491 clat percentiles (msec): 00:19:05.491 | 1.00th=[ 47], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 146], 00:19:05.491 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 155], 60.00th=[ 155], 00:19:05.491 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 159], 00:19:05.491 | 99.00th=[ 171], 99.50th=[ 230], 99.90th=[ 271], 99.95th=[ 271], 00:19:05.491 | 99.99th=[ 279] 00:19:05.491 bw ( KiB/s): min=104239, max=128512, per=7.93%, avg=106935.80, stdev=5164.01, samples=20 00:19:05.491 iops : min= 407, max= 502, avg=417.65, stdev=20.19, samples=20 00:19:05.491 lat (msec) : 10=0.05%, 20=0.09%, 50=0.94%, 100=1.37%, 250=97.31% 00:19:05.491 lat (msec) : 500=0.24% 00:19:05.491 cpu : usr=0.95%, sys=1.02%, ctx=5510, majf=0, minf=1 00:19:05.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:05.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.491 issued rwts: total=0,4241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.491 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.491 job7: (groupid=0, jobs=1): err= 0: pid=91429: Sun Dec 1 14:59:37 2024 00:19:05.491 write: IOPS=583, BW=146MiB/s (153MB/s)(1474MiB/10100msec); 0 zone resets 00:19:05.491 slat (usec): min=20, max=19818, avg=1690.19, stdev=2876.90 00:19:05.491 clat (msec): min=5, max=202, avg=107.89, stdev=13.88 00:19:05.491 lat (msec): min=5, max=202, avg=109.58, stdev=13.81 00:19:05.491 
clat percentiles (msec): 00:19:05.491 | 1.00th=[ 97], 5.00th=[ 100], 10.00th=[ 101], 20.00th=[ 103], 00:19:05.491 | 30.00th=[ 105], 40.00th=[ 106], 50.00th=[ 107], 60.00th=[ 107], 00:19:05.491 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 140], 00:19:05.491 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 190], 99.95th=[ 197], 00:19:05.491 | 99.99th=[ 203] 00:19:05.491 bw ( KiB/s): min=107008, max=156160, per=11.07%, avg=149305.00, stdev=11441.95, samples=20 00:19:05.491 iops : min= 418, max= 610, avg=583.10, stdev=44.68, samples=20 00:19:05.491 lat (msec) : 10=0.07%, 20=0.22%, 50=0.20%, 100=9.31%, 250=90.20% 00:19:05.491 cpu : usr=1.68%, sys=1.80%, ctx=7451, majf=0, minf=1 00:19:05.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:05.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.491 issued rwts: total=0,5897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.491 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.491 job8: (groupid=0, jobs=1): err= 0: pid=91430: Sun Dec 1 14:59:37 2024 00:19:05.491 write: IOPS=696, BW=174MiB/s (183MB/s)(1754MiB/10070msec); 0 zone resets 00:19:05.491 slat (usec): min=20, max=12015, avg=1419.53, stdev=2383.65 00:19:05.491 clat (msec): min=15, max=154, avg=90.43, stdev= 5.50 00:19:05.491 lat (msec): min=15, max=155, avg=91.85, stdev= 5.13 00:19:05.491 clat percentiles (msec): 00:19:05.491 | 1.00th=[ 85], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 88], 00:19:05.491 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 92], 00:19:05.491 | 70.00th=[ 93], 80.00th=[ 93], 90.00th=[ 94], 95.00th=[ 94], 00:19:05.491 | 99.00th=[ 97], 99.50th=[ 110], 99.90th=[ 144], 99.95th=[ 150], 00:19:05.491 | 99.99th=[ 155] 00:19:05.491 bw ( KiB/s): min=169472, max=180736, per=13.19%, avg=177927.90, stdev=2440.10, samples=20 00:19:05.491 iops : min= 662, max= 706, avg=695.00, stdev= 9.55, samples=20 00:19:05.491 lat (msec) : 20=0.06%, 50=0.23%, 100=98.82%, 250=0.90% 00:19:05.491 cpu : usr=2.19%, sys=1.81%, ctx=8613, majf=0, minf=1 00:19:05.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:05.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.491 issued rwts: total=0,7014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.491 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.491 job9: (groupid=0, jobs=1): err= 0: pid=91431: Sun Dec 1 14:59:37 2024 00:19:05.491 write: IOPS=371, BW=92.9MiB/s (97.4MB/s)(943MiB/10151msec); 0 zone resets 00:19:05.491 slat (usec): min=22, max=13301, avg=2631.25, stdev=4545.25 00:19:05.491 clat (msec): min=6, max=317, avg=169.57, stdev=19.18 00:19:05.491 lat (msec): min=6, max=317, avg=172.20, stdev=18.97 00:19:05.491 clat percentiles (msec): 00:19:05.491 | 1.00th=[ 69], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 165], 00:19:05.491 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 174], 60.00th=[ 174], 00:19:05.491 | 70.00th=[ 176], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 178], 00:19:05.491 | 99.00th=[ 222], 99.50th=[ 264], 99.90th=[ 309], 99.95th=[ 317], 00:19:05.491 | 99.99th=[ 317] 00:19:05.491 bw ( KiB/s): min=92160, max=110080, per=7.03%, avg=94915.50, stdev=3744.74, samples=20 00:19:05.491 iops : min= 360, max= 430, avg=370.70, stdev=14.65, samples=20 00:19:05.491 lat (msec) : 10=0.11%, 20=0.08%, 50=0.37%, 100=0.80%, 
250=97.96% 00:19:05.491 lat (msec) : 500=0.69% 00:19:05.491 cpu : usr=0.85%, sys=1.17%, ctx=3786, majf=0, minf=1 00:19:05.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:19:05.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.491 issued rwts: total=0,3771,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.491 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.491 job10: (groupid=0, jobs=1): err= 0: pid=91432: Sun Dec 1 14:59:37 2024 00:19:05.491 write: IOPS=424, BW=106MiB/s (111MB/s)(1075MiB/10136msec); 0 zone resets 00:19:05.491 slat (usec): min=22, max=11452, avg=2321.09, stdev=3980.65 00:19:05.491 clat (msec): min=5, max=284, avg=148.50, stdev=19.10 00:19:05.491 lat (msec): min=5, max=284, avg=150.82, stdev=18.98 00:19:05.491 clat percentiles (msec): 00:19:05.491 | 1.00th=[ 79], 5.00th=[ 108], 10.00th=[ 142], 20.00th=[ 146], 00:19:05.491 | 30.00th=[ 148], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 155], 00:19:05.491 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 157], 95.00th=[ 159], 00:19:05.491 | 99.00th=[ 178], 99.50th=[ 236], 99.90th=[ 275], 99.95th=[ 275], 00:19:05.491 | 99.99th=[ 284] 00:19:05.491 bw ( KiB/s): min=103825, max=148480, per=8.03%, avg=108378.70, stdev=9848.91, samples=20 00:19:05.491 iops : min= 405, max= 580, avg=423.30, stdev=38.49, samples=20 00:19:05.491 lat (msec) : 10=0.09%, 20=0.09%, 50=0.37%, 100=0.84%, 250=98.28% 00:19:05.491 lat (msec) : 500=0.33% 00:19:05.491 cpu : usr=0.90%, sys=1.54%, ctx=3894, majf=0, minf=1 00:19:05.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:05.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.491 issued rwts: total=0,4299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.491 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.491 00:19:05.491 Run status group 0 (all jobs): 00:19:05.491 WRITE: bw=1318MiB/s (1382MB/s), 91.2MiB/s-174MiB/s (95.7MB/s-183MB/s), io=13.1GiB (14.0GB), run=10070-10160msec 00:19:05.491 00:19:05.491 Disk stats (read/write): 00:19:05.491 nvme0n1: ios=49/11569, merge=0/0, ticks=206/1209430, in_queue=1209636, util=97.98% 00:19:05.491 nvme10n1: ios=49/13881, merge=0/0, ticks=53/1214517, in_queue=1214570, util=98.05% 00:19:05.491 nvme1n1: ios=24/7280, merge=0/0, ticks=27/1209879, in_queue=1209906, util=97.93% 00:19:05.491 nvme2n1: ios=13/8435, merge=0/0, ticks=19/1208599, in_queue=1208618, util=97.91% 00:19:05.491 nvme3n1: ios=0/7290, merge=0/0, ticks=0/1208212, in_queue=1208212, util=97.92% 00:19:05.491 nvme4n1: ios=0/7252, merge=0/0, ticks=0/1207674, in_queue=1207674, util=98.18% 00:19:05.491 nvme5n1: ios=0/8322, merge=0/0, ticks=0/1208952, in_queue=1208952, util=98.28% 00:19:05.491 nvme6n1: ios=0/11644, merge=0/0, ticks=0/1213048, in_queue=1213048, util=98.58% 00:19:05.491 nvme7n1: ios=0/13832, merge=0/0, ticks=0/1210971, in_queue=1210971, util=98.55% 00:19:05.491 nvme8n1: ios=0/7389, merge=0/0, ticks=0/1208632, in_queue=1208632, util=98.78% 00:19:05.491 nvme9n1: ios=0/8444, merge=0/0, ticks=0/1209205, in_queue=1209205, util=98.91% 00:19:05.491 14:59:37 -- target/multiconnection.sh@36 -- # sync 00:19:05.491 14:59:37 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:05.491 14:59:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.491 14:59:37 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:05.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.491 14:59:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:05.491 14:59:37 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.491 14:59:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.491 14:59:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:05.491 14:59:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:05.491 14:59:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.491 14:59:37 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.491 14:59:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.491 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.492 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:19:05.492 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.492 14:59:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.492 14:59:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:05.492 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:05.492 14:59:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:05.492 14:59:37 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.492 14:59:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.492 14:59:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:05.492 14:59:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.492 14:59:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:05.492 14:59:37 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.492 14:59:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:05.492 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.492 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:19:05.492 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.492 14:59:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.492 14:59:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:05.492 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:05.492 14:59:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:05.492 14:59:37 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.492 14:59:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.492 14:59:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:05.492 14:59:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.492 14:59:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:05.492 14:59:37 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.492 14:59:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:05.492 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.492 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:19:05.492 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.492 14:59:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.492 14:59:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:05.492 
NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:05.492 14:59:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:05.492 14:59:37 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.492 14:59:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.492 14:59:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:05.492 14:59:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.492 14:59:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:05.492 14:59:37 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.492 14:59:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:05.492 14:59:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.492 14:59:37 -- common/autotest_common.sh@10 -- # set +x 00:19:05.492 14:59:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.492 14:59:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.492 14:59:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:05.492 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:05.492 14:59:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:05.492 14:59:38 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.492 14:59:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.492 14:59:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:05.492 14:59:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:05.492 14:59:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.492 14:59:38 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.492 14:59:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:05.492 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.492 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:19:05.492 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.492 14:59:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.492 14:59:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:05.492 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:05.492 14:59:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:05.492 14:59:38 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.492 14:59:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.492 14:59:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:05.492 14:59:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.492 14:59:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:05.492 14:59:38 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.492 14:59:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:05.492 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.492 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:19:05.492 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.492 14:59:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.492 14:59:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:05.492 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:05.492 14:59:38 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:05.492 14:59:38 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.492 14:59:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.492 14:59:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:05.492 14:59:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.492 14:59:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:05.492 14:59:38 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.492 14:59:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:05.492 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.492 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:19:05.492 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.492 14:59:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.492 14:59:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:05.492 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:05.492 14:59:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:05.492 14:59:38 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.492 14:59:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.492 14:59:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:05.492 14:59:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:05.492 14:59:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.492 14:59:38 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.492 14:59:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:05.492 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.492 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:19:05.492 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.492 14:59:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.492 14:59:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:05.492 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:05.492 14:59:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:05.492 14:59:38 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.492 14:59:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.492 14:59:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:05.752 14:59:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.752 14:59:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:05.752 14:59:38 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.752 14:59:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:05.752 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.752 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:19:05.752 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.752 14:59:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.752 14:59:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:05.752 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:05.752 14:59:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:05.752 14:59:38 -- 
common/autotest_common.sh@1208 -- # local i=0 00:19:05.752 14:59:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.752 14:59:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:05.752 14:59:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.752 14:59:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:05.752 14:59:38 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.752 14:59:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:05.752 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.752 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:19:05.752 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.752 14:59:38 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.752 14:59:38 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:05.752 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:05.752 14:59:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:05.752 14:59:38 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.752 14:59:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.752 14:59:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:06.012 14:59:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:06.012 14:59:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:06.012 14:59:38 -- common/autotest_common.sh@1220 -- # return 0 00:19:06.012 14:59:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:06.012 14:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.012 14:59:38 -- common/autotest_common.sh@10 -- # set +x 00:19:06.012 14:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.012 14:59:38 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:06.012 14:59:38 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:06.012 14:59:38 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:06.012 14:59:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:06.012 14:59:38 -- nvmf/common.sh@116 -- # sync 00:19:06.012 14:59:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:06.012 14:59:38 -- nvmf/common.sh@119 -- # set +e 00:19:06.012 14:59:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:06.012 14:59:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:06.012 rmmod nvme_tcp 00:19:06.012 rmmod nvme_fabrics 00:19:06.012 rmmod nvme_keyring 00:19:06.012 14:59:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:06.012 14:59:38 -- nvmf/common.sh@123 -- # set -e 00:19:06.012 14:59:38 -- nvmf/common.sh@124 -- # return 0 00:19:06.012 14:59:38 -- nvmf/common.sh@477 -- # '[' -n 90723 ']' 00:19:06.012 14:59:38 -- nvmf/common.sh@478 -- # killprocess 90723 00:19:06.012 14:59:38 -- common/autotest_common.sh@936 -- # '[' -z 90723 ']' 00:19:06.012 14:59:38 -- common/autotest_common.sh@940 -- # kill -0 90723 00:19:06.012 14:59:38 -- common/autotest_common.sh@941 -- # uname 00:19:06.012 14:59:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:06.012 14:59:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90723 00:19:06.012 killing process with pid 90723 00:19:06.012 14:59:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:06.012 14:59:39 -- common/autotest_common.sh@946 -- # 
'[' reactor_0 = sudo ']' 00:19:06.012 14:59:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90723' 00:19:06.012 14:59:39 -- common/autotest_common.sh@955 -- # kill 90723 00:19:06.012 14:59:39 -- common/autotest_common.sh@960 -- # wait 90723 00:19:06.580 14:59:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:06.580 14:59:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:06.580 14:59:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:06.580 14:59:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.580 14:59:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:06.580 14:59:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.580 14:59:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.580 14:59:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.580 14:59:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:06.580 ************************************ 00:19:06.580 END TEST nvmf_multiconnection 00:19:06.580 ************************************ 00:19:06.580 00:19:06.580 real 0m50.097s 00:19:06.580 user 2m51.389s 00:19:06.580 sys 0m23.775s 00:19:06.580 14:59:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:06.580 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:06.580 14:59:39 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:06.580 14:59:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:06.580 14:59:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:06.580 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:19:06.580 ************************************ 00:19:06.580 START TEST nvmf_initiator_timeout 00:19:06.580 ************************************ 00:19:06.580 14:59:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:06.580 * Looking for test storage... 00:19:06.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:06.580 14:59:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:06.580 14:59:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:06.580 14:59:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:06.840 14:59:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:06.840 14:59:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:06.840 14:59:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:06.840 14:59:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:06.840 14:59:39 -- scripts/common.sh@335 -- # IFS=.-: 00:19:06.840 14:59:39 -- scripts/common.sh@335 -- # read -ra ver1 00:19:06.840 14:59:39 -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.840 14:59:39 -- scripts/common.sh@336 -- # read -ra ver2 00:19:06.840 14:59:39 -- scripts/common.sh@337 -- # local 'op=<' 00:19:06.840 14:59:39 -- scripts/common.sh@339 -- # ver1_l=2 00:19:06.840 14:59:39 -- scripts/common.sh@340 -- # ver2_l=1 00:19:06.840 14:59:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:06.840 14:59:39 -- scripts/common.sh@343 -- # case "$op" in 00:19:06.840 14:59:39 -- scripts/common.sh@344 -- # : 1 00:19:06.840 14:59:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:06.840 14:59:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.840 14:59:39 -- scripts/common.sh@364 -- # decimal 1 00:19:06.840 14:59:39 -- scripts/common.sh@352 -- # local d=1 00:19:06.840 14:59:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.840 14:59:39 -- scripts/common.sh@354 -- # echo 1 00:19:06.840 14:59:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:06.840 14:59:39 -- scripts/common.sh@365 -- # decimal 2 00:19:06.840 14:59:39 -- scripts/common.sh@352 -- # local d=2 00:19:06.840 14:59:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.840 14:59:39 -- scripts/common.sh@354 -- # echo 2 00:19:06.840 14:59:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:06.840 14:59:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:06.840 14:59:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:06.840 14:59:39 -- scripts/common.sh@367 -- # return 0 00:19:06.840 14:59:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.840 14:59:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:06.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.840 --rc genhtml_branch_coverage=1 00:19:06.840 --rc genhtml_function_coverage=1 00:19:06.840 --rc genhtml_legend=1 00:19:06.840 --rc geninfo_all_blocks=1 00:19:06.841 --rc geninfo_unexecuted_blocks=1 00:19:06.841 00:19:06.841 ' 00:19:06.841 14:59:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:06.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.841 --rc genhtml_branch_coverage=1 00:19:06.841 --rc genhtml_function_coverage=1 00:19:06.841 --rc genhtml_legend=1 00:19:06.841 --rc geninfo_all_blocks=1 00:19:06.841 --rc geninfo_unexecuted_blocks=1 00:19:06.841 00:19:06.841 ' 00:19:06.841 14:59:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:06.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.841 --rc genhtml_branch_coverage=1 00:19:06.841 --rc genhtml_function_coverage=1 00:19:06.841 --rc genhtml_legend=1 00:19:06.841 --rc geninfo_all_blocks=1 00:19:06.841 --rc geninfo_unexecuted_blocks=1 00:19:06.841 00:19:06.841 ' 00:19:06.841 14:59:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:06.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.841 --rc genhtml_branch_coverage=1 00:19:06.841 --rc genhtml_function_coverage=1 00:19:06.841 --rc genhtml_legend=1 00:19:06.841 --rc geninfo_all_blocks=1 00:19:06.841 --rc geninfo_unexecuted_blocks=1 00:19:06.841 00:19:06.841 ' 00:19:06.841 14:59:39 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.841 14:59:39 -- nvmf/common.sh@7 -- # uname -s 00:19:06.841 14:59:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.841 14:59:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.841 14:59:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.841 14:59:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.841 14:59:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.841 14:59:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.841 14:59:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.841 14:59:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.841 14:59:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.841 14:59:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.841 14:59:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 
00:19:06.841 14:59:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:19:06.841 14:59:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.841 14:59:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.841 14:59:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:06.841 14:59:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.841 14:59:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.841 14:59:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.841 14:59:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.841 14:59:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.841 14:59:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.841 14:59:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.841 14:59:39 -- paths/export.sh@5 -- # export PATH 00:19:06.841 14:59:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.841 14:59:39 -- nvmf/common.sh@46 -- # : 0 00:19:06.841 14:59:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:06.841 14:59:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:06.841 14:59:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:06.841 14:59:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.841 14:59:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.841 14:59:39 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:06.841 14:59:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:06.841 14:59:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:06.841 14:59:39 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:06.841 14:59:39 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:06.841 14:59:39 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:06.841 14:59:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:06.841 14:59:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.841 14:59:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:06.841 14:59:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:06.841 14:59:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:06.841 14:59:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.841 14:59:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.841 14:59:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.841 14:59:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:06.841 14:59:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:06.841 14:59:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:06.841 14:59:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:06.841 14:59:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:06.841 14:59:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:06.841 14:59:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.841 14:59:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.841 14:59:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:06.841 14:59:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:06.841 14:59:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:06.841 14:59:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:06.841 14:59:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:06.841 14:59:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.841 14:59:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:06.841 14:59:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:06.841 14:59:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:06.841 14:59:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:06.841 14:59:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:06.841 14:59:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:06.841 Cannot find device "nvmf_tgt_br" 00:19:06.841 14:59:39 -- nvmf/common.sh@154 -- # true 00:19:06.841 14:59:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.841 Cannot find device "nvmf_tgt_br2" 00:19:06.841 14:59:39 -- nvmf/common.sh@155 -- # true 00:19:06.841 14:59:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:06.841 14:59:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:06.841 Cannot find device "nvmf_tgt_br" 00:19:06.841 14:59:39 -- nvmf/common.sh@157 -- # true 00:19:06.841 14:59:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:06.841 Cannot find device "nvmf_tgt_br2" 00:19:06.841 14:59:39 -- nvmf/common.sh@158 -- # true 00:19:06.841 14:59:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:06.841 14:59:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:06.841 14:59:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:06.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.841 14:59:39 -- nvmf/common.sh@161 -- # true 00:19:06.841 14:59:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.841 14:59:39 -- nvmf/common.sh@162 -- # true 00:19:06.841 14:59:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:06.841 14:59:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:06.841 14:59:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:06.841 14:59:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:06.841 14:59:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:06.841 14:59:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:06.841 14:59:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:06.841 14:59:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:06.841 14:59:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:06.841 14:59:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:06.841 14:59:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:06.841 14:59:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:06.841 14:59:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:07.101 14:59:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:07.101 14:59:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:07.101 14:59:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:07.101 14:59:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:07.101 14:59:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:07.101 14:59:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.101 14:59:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:07.101 14:59:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.101 14:59:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.101 14:59:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.101 14:59:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:07.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:19:07.101 00:19:07.101 --- 10.0.0.2 ping statistics --- 00:19:07.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.101 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:07.101 14:59:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:07.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:07.101 00:19:07.101 --- 10.0.0.3 ping statistics --- 00:19:07.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.101 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:07.101 14:59:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:07.101 00:19:07.101 --- 10.0.0.1 ping statistics --- 00:19:07.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.101 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:07.101 14:59:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.101 14:59:40 -- nvmf/common.sh@421 -- # return 0 00:19:07.101 14:59:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:07.101 14:59:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.101 14:59:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:07.101 14:59:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:07.101 14:59:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.101 14:59:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:07.101 14:59:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:07.101 14:59:40 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:07.101 14:59:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:07.101 14:59:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.101 14:59:40 -- common/autotest_common.sh@10 -- # set +x 00:19:07.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.101 14:59:40 -- nvmf/common.sh@469 -- # nvmfpid=91810 00:19:07.101 14:59:40 -- nvmf/common.sh@470 -- # waitforlisten 91810 00:19:07.101 14:59:40 -- common/autotest_common.sh@829 -- # '[' -z 91810 ']' 00:19:07.101 14:59:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.101 14:59:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.101 14:59:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.101 14:59:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:07.101 14:59:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.101 14:59:40 -- common/autotest_common.sh@10 -- # set +x 00:19:07.101 [2024-12-01 14:59:40.138310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:07.101 [2024-12-01 14:59:40.138394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.360 [2024-12-01 14:59:40.276445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.360 [2024-12-01 14:59:40.332481] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:07.360 [2024-12-01 14:59:40.332619] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.360 [2024-12-01 14:59:40.332631] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.360 [2024-12-01 14:59:40.332638] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
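Note: the nvmf_veth_init sequence traced above reduces to the following standalone sketch. Namespace, interface and address names are taken verbatim from the trace; the harness's cleanup, retries and error handling are omitted.

# create the target namespace and three veth pairs (host end / bridge end)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring every end up, including loopback inside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side ends together and allow NVMe/TCP traffic on port 4420
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity-check connectivity in both directions, as the pings above do
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1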
00:19:07.360 [2024-12-01 14:59:40.332996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.360 [2024-12-01 14:59:40.333042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.360 [2024-12-01 14:59:40.333354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.360 [2024-12-01 14:59:40.333364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.296 14:59:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.296 14:59:41 -- common/autotest_common.sh@862 -- # return 0 00:19:08.296 14:59:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:08.296 14:59:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.296 14:59:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.296 14:59:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.296 14:59:41 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:08.297 14:59:41 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:08.297 14:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.297 14:59:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.297 Malloc0 00:19:08.297 14:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.297 14:59:41 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:08.297 14:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.297 14:59:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.297 Delay0 00:19:08.297 14:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.297 14:59:41 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:08.297 14:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.297 14:59:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.297 [2024-12-01 14:59:41.201594] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.297 14:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.297 14:59:41 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:08.297 14:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.297 14:59:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.297 14:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.297 14:59:41 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:08.297 14:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.297 14:59:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.297 14:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.297 14:59:41 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.297 14:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.297 14:59:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.297 [2024-12-01 14:59:41.229846] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.297 14:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.297 14:59:41 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:08.297 14:59:41 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:08.297 14:59:41 -- common/autotest_common.sh@1187 -- # local i=0 00:19:08.297 14:59:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:08.297 14:59:41 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:08.297 14:59:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:10.830 14:59:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:10.830 14:59:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:10.830 14:59:43 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:10.830 14:59:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:10.830 14:59:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:10.830 14:59:43 -- common/autotest_common.sh@1197 -- # return 0 00:19:10.830 14:59:43 -- target/initiator_timeout.sh@35 -- # fio_pid=91891 00:19:10.830 14:59:43 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:10.830 14:59:43 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:10.830 [global] 00:19:10.830 thread=1 00:19:10.830 invalidate=1 00:19:10.831 rw=write 00:19:10.831 time_based=1 00:19:10.831 runtime=60 00:19:10.831 ioengine=libaio 00:19:10.831 direct=1 00:19:10.831 bs=4096 00:19:10.831 iodepth=1 00:19:10.831 norandommap=0 00:19:10.831 numjobs=1 00:19:10.831 00:19:10.831 verify_dump=1 00:19:10.831 verify_backlog=512 00:19:10.831 verify_state_save=0 00:19:10.831 do_verify=1 00:19:10.831 verify=crc32c-intel 00:19:10.831 [job0] 00:19:10.831 filename=/dev/nvme0n1 00:19:10.831 Could not set queue depth (nvme0n1) 00:19:10.831 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.831 fio-3.35 00:19:10.831 Starting 1 thread 00:19:13.362 14:59:46 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:13.362 14:59:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.362 14:59:46 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 true 00:19:13.362 14:59:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.362 14:59:46 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:13.362 14:59:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.362 14:59:46 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 true 00:19:13.362 14:59:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.362 14:59:46 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:13.362 14:59:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.362 14:59:46 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 true 00:19:13.362 14:59:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.362 14:59:46 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:13.362 14:59:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.362 14:59:46 -- common/autotest_common.sh@10 -- # set +x 00:19:13.362 true 00:19:13.362 14:59:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.362 14:59:46 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:16.648 14:59:49 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:16.648 14:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.648 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:16.648 true 00:19:16.648 14:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.648 14:59:49 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:16.648 14:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.648 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:16.648 true 00:19:16.648 14:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.648 14:59:49 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:16.648 14:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.648 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:16.648 true 00:19:16.648 14:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.648 14:59:49 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:16.648 14:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.648 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:19:16.648 true 00:19:16.648 14:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.648 14:59:49 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:16.648 14:59:49 -- target/initiator_timeout.sh@54 -- # wait 91891 00:20:12.906 00:20:12.906 job0: (groupid=0, jobs=1): err= 0: pid=91913: Sun Dec 1 15:00:43 2024 00:20:12.906 read: IOPS=816, BW=3264KiB/s (3343kB/s)(191MiB/60000msec) 00:20:12.906 slat (usec): min=10, max=10403, avg=14.59, stdev=61.68 00:20:12.906 clat (usec): min=148, max=40777k, avg=1031.35, stdev=184273.13 00:20:12.906 lat (usec): min=169, max=40777k, avg=1045.95, stdev=184273.24 00:20:12.906 clat percentiles (usec): 00:20:12.906 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:20:12.906 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 198], 00:20:12.906 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 235], 00:20:12.906 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 330], 99.95th=[ 375], 00:20:12.906 | 99.99th=[ 709] 00:20:12.906 write: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec); 0 zone resets 00:20:12.906 slat (usec): min=17, max=633, avg=20.53, stdev= 5.45 00:20:12.906 clat (usec): min=111, max=1250, avg=155.39, stdev=19.62 00:20:12.906 lat (usec): min=138, max=1287, avg=175.92, stdev=20.83 00:20:12.906 clat percentiles (usec): 00:20:12.906 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:20:12.906 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:20:12.906 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 188], 00:20:12.906 | 99.00th=[ 212], 99.50th=[ 229], 99.90th=[ 285], 99.95th=[ 351], 00:20:12.906 | 99.99th=[ 668] 00:20:12.906 bw ( KiB/s): min= 6096, max=12288, per=100.00%, avg=10131.74, stdev=1896.14, samples=38 00:20:12.906 iops : min= 1524, max= 3072, avg=2532.92, stdev=474.04, samples=38 00:20:12.906 lat (usec) : 250=99.00%, 500=0.98%, 750=0.02%, 1000=0.01% 00:20:12.906 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:12.906 cpu : usr=0.46%, sys=2.05%, ctx=98136, majf=0, minf=5 00:20:12.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:12.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.906 issued rwts: total=48966,49152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.906 00:20:12.906 Run status group 0 (all jobs): 00:20:12.906 READ: bw=3264KiB/s (3343kB/s), 3264KiB/s-3264KiB/s (3343kB/s-3343kB/s), io=191MiB (201MB), run=60000-60000msec 00:20:12.906 WRITE: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:20:12.906 00:20:12.906 Disk stats (read/write): 00:20:12.906 nvme0n1: ios=48907/48987, merge=0/0, ticks=10075/8129, in_queue=18204, util=99.72% 00:20:12.906 15:00:43 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:12.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:12.906 15:00:43 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:12.906 15:00:43 -- common/autotest_common.sh@1208 -- # local i=0 00:20:12.906 15:00:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:12.906 15:00:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:12.906 15:00:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:12.906 15:00:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:12.906 15:00:43 -- common/autotest_common.sh@1220 -- # return 0 00:20:12.906 15:00:43 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:12.906 nvmf hotplug test: fio successful as expected 00:20:12.906 15:00:43 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:12.906 15:00:43 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.906 15:00:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.906 15:00:43 -- common/autotest_common.sh@10 -- # set +x 00:20:12.906 15:00:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.906 15:00:43 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:12.906 15:00:43 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:12.906 15:00:43 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:12.906 15:00:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:12.906 15:00:43 -- nvmf/common.sh@116 -- # sync 00:20:12.906 15:00:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:12.906 15:00:43 -- nvmf/common.sh@119 -- # set +e 00:20:12.906 15:00:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:12.906 15:00:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:12.906 rmmod nvme_tcp 00:20:12.906 rmmod nvme_fabrics 00:20:12.906 rmmod nvme_keyring 00:20:12.906 15:00:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:12.906 15:00:43 -- nvmf/common.sh@123 -- # set -e 00:20:12.906 15:00:43 -- nvmf/common.sh@124 -- # return 0 00:20:12.906 15:00:43 -- nvmf/common.sh@477 -- # '[' -n 91810 ']' 00:20:12.906 15:00:43 -- nvmf/common.sh@478 -- # killprocess 91810 00:20:12.906 15:00:43 -- common/autotest_common.sh@936 -- # '[' -z 91810 ']' 00:20:12.906 15:00:43 -- common/autotest_common.sh@940 -- # kill -0 91810 00:20:12.906 15:00:43 -- common/autotest_common.sh@941 -- # uname 00:20:12.906 15:00:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.906 15:00:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91810 00:20:12.906 15:00:43 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:12.906 15:00:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:12.906 killing process with pid 91810 00:20:12.906 15:00:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91810' 00:20:12.906 15:00:43 -- common/autotest_common.sh@955 -- # kill 91810 00:20:12.906 15:00:43 -- common/autotest_common.sh@960 -- # wait 91810 00:20:12.906 15:00:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:12.906 15:00:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:12.906 15:00:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:12.906 15:00:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.906 15:00:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:12.906 15:00:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.906 15:00:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.906 15:00:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.906 15:00:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:12.906 00:20:12.906 real 1m4.654s 00:20:12.906 user 4m7.190s 00:20:12.906 sys 0m7.935s 00:20:12.906 15:00:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:12.906 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:20:12.906 ************************************ 00:20:12.906 END TEST nvmf_initiator_timeout 00:20:12.906 ************************************ 00:20:12.906 15:00:44 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:12.906 15:00:44 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:12.906 15:00:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.906 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:20:12.906 15:00:44 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:12.906 15:00:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.906 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:20:12.906 15:00:44 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:12.906 15:00:44 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:12.906 15:00:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:12.906 15:00:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:12.906 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:20:12.906 ************************************ 00:20:12.906 START TEST nvmf_multicontroller 00:20:12.906 ************************************ 00:20:12.906 15:00:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:12.906 * Looking for test storage... 
00:20:12.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:12.906 15:00:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:12.906 15:00:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:12.906 15:00:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:12.906 15:00:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:12.906 15:00:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:12.906 15:00:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:12.906 15:00:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:12.906 15:00:44 -- scripts/common.sh@335 -- # IFS=.-: 00:20:12.906 15:00:44 -- scripts/common.sh@335 -- # read -ra ver1 00:20:12.906 15:00:44 -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.906 15:00:44 -- scripts/common.sh@336 -- # read -ra ver2 00:20:12.906 15:00:44 -- scripts/common.sh@337 -- # local 'op=<' 00:20:12.906 15:00:44 -- scripts/common.sh@339 -- # ver1_l=2 00:20:12.906 15:00:44 -- scripts/common.sh@340 -- # ver2_l=1 00:20:12.906 15:00:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:12.906 15:00:44 -- scripts/common.sh@343 -- # case "$op" in 00:20:12.906 15:00:44 -- scripts/common.sh@344 -- # : 1 00:20:12.906 15:00:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:12.906 15:00:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.906 15:00:44 -- scripts/common.sh@364 -- # decimal 1 00:20:12.906 15:00:44 -- scripts/common.sh@352 -- # local d=1 00:20:12.906 15:00:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.906 15:00:44 -- scripts/common.sh@354 -- # echo 1 00:20:12.906 15:00:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:12.906 15:00:44 -- scripts/common.sh@365 -- # decimal 2 00:20:12.906 15:00:44 -- scripts/common.sh@352 -- # local d=2 00:20:12.906 15:00:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.906 15:00:44 -- scripts/common.sh@354 -- # echo 2 00:20:12.906 15:00:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:12.906 15:00:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:12.906 15:00:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:12.906 15:00:44 -- scripts/common.sh@367 -- # return 0 00:20:12.907 15:00:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.907 15:00:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:12.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.907 --rc genhtml_branch_coverage=1 00:20:12.907 --rc genhtml_function_coverage=1 00:20:12.907 --rc genhtml_legend=1 00:20:12.907 --rc geninfo_all_blocks=1 00:20:12.907 --rc geninfo_unexecuted_blocks=1 00:20:12.907 00:20:12.907 ' 00:20:12.907 15:00:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:12.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.907 --rc genhtml_branch_coverage=1 00:20:12.907 --rc genhtml_function_coverage=1 00:20:12.907 --rc genhtml_legend=1 00:20:12.907 --rc geninfo_all_blocks=1 00:20:12.907 --rc geninfo_unexecuted_blocks=1 00:20:12.907 00:20:12.907 ' 00:20:12.907 15:00:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:12.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.907 --rc genhtml_branch_coverage=1 00:20:12.907 --rc genhtml_function_coverage=1 00:20:12.907 --rc genhtml_legend=1 00:20:12.907 --rc geninfo_all_blocks=1 00:20:12.907 --rc geninfo_unexecuted_blocks=1 00:20:12.907 00:20:12.907 ' 00:20:12.907 
15:00:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:12.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.907 --rc genhtml_branch_coverage=1 00:20:12.907 --rc genhtml_function_coverage=1 00:20:12.907 --rc genhtml_legend=1 00:20:12.907 --rc geninfo_all_blocks=1 00:20:12.907 --rc geninfo_unexecuted_blocks=1 00:20:12.907 00:20:12.907 ' 00:20:12.907 15:00:44 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:12.907 15:00:44 -- nvmf/common.sh@7 -- # uname -s 00:20:12.907 15:00:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.907 15:00:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.907 15:00:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.907 15:00:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.907 15:00:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.907 15:00:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.907 15:00:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.907 15:00:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.907 15:00:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.907 15:00:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.907 15:00:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:20:12.907 15:00:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:20:12.907 15:00:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.907 15:00:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.907 15:00:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:12.907 15:00:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:12.907 15:00:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.907 15:00:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.907 15:00:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.907 15:00:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.907 15:00:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.907 15:00:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.907 15:00:44 -- paths/export.sh@5 -- # export PATH 00:20:12.907 15:00:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.907 15:00:44 -- nvmf/common.sh@46 -- # : 0 00:20:12.907 15:00:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:12.907 15:00:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:12.907 15:00:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:12.907 15:00:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.907 15:00:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.907 15:00:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:12.907 15:00:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:12.907 15:00:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:12.907 15:00:44 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:12.907 15:00:44 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.907 15:00:44 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:12.907 15:00:44 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:12.907 15:00:44 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.907 15:00:44 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:12.907 15:00:44 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:12.907 15:00:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:12.907 15:00:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.907 15:00:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:12.907 15:00:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:12.907 15:00:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:12.907 15:00:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.907 15:00:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.907 15:00:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.907 15:00:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:12.907 15:00:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:12.907 15:00:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:12.907 15:00:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:12.907 15:00:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:12.907 15:00:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:12.907 15:00:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.907 15:00:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:20:12.907 15:00:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:12.907 15:00:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:12.907 15:00:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:12.907 15:00:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:12.907 15:00:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:12.907 15:00:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.907 15:00:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:12.907 15:00:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:12.907 15:00:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:12.907 15:00:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:12.907 15:00:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:12.907 15:00:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:12.907 Cannot find device "nvmf_tgt_br" 00:20:12.907 15:00:44 -- nvmf/common.sh@154 -- # true 00:20:12.907 15:00:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.907 Cannot find device "nvmf_tgt_br2" 00:20:12.907 15:00:44 -- nvmf/common.sh@155 -- # true 00:20:12.907 15:00:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:12.907 15:00:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:12.907 Cannot find device "nvmf_tgt_br" 00:20:12.907 15:00:44 -- nvmf/common.sh@157 -- # true 00:20:12.907 15:00:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:12.907 Cannot find device "nvmf_tgt_br2" 00:20:12.907 15:00:44 -- nvmf/common.sh@158 -- # true 00:20:12.907 15:00:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:12.907 15:00:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:12.907 15:00:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.907 15:00:44 -- nvmf/common.sh@161 -- # true 00:20:12.907 15:00:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.907 15:00:44 -- nvmf/common.sh@162 -- # true 00:20:12.907 15:00:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:12.907 15:00:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:12.907 15:00:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:12.907 15:00:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:12.907 15:00:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:12.907 15:00:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:12.907 15:00:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:12.907 15:00:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:12.907 15:00:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:12.907 15:00:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:12.907 15:00:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:12.907 15:00:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:20:12.907 15:00:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:12.907 15:00:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:12.907 15:00:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:12.907 15:00:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:12.908 15:00:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:12.908 15:00:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:12.908 15:00:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:12.908 15:00:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:12.908 15:00:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:12.908 15:00:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:12.908 15:00:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:12.908 15:00:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:12.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:20:12.908 00:20:12.908 --- 10.0.0.2 ping statistics --- 00:20:12.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.908 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:20:12.908 15:00:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:12.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:12.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:12.908 00:20:12.908 --- 10.0.0.3 ping statistics --- 00:20:12.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.908 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:12.908 15:00:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:12.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:12.908 00:20:12.908 --- 10.0.0.1 ping statistics --- 00:20:12.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.908 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:12.908 15:00:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.908 15:00:44 -- nvmf/common.sh@421 -- # return 0 00:20:12.908 15:00:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:12.908 15:00:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.908 15:00:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:12.908 15:00:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:12.908 15:00:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.908 15:00:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:12.908 15:00:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:12.908 15:00:44 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:12.908 15:00:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:12.908 15:00:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.908 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:20:12.908 15:00:44 -- nvmf/common.sh@469 -- # nvmfpid=92752 00:20:12.908 15:00:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:12.908 15:00:44 -- nvmf/common.sh@470 -- # waitforlisten 92752 00:20:12.908 15:00:44 -- common/autotest_common.sh@829 -- # '[' -z 92752 ']' 00:20:12.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.908 15:00:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.908 15:00:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.908 15:00:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.908 15:00:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.908 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:20:12.908 [2024-12-01 15:00:44.886812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:12.908 [2024-12-01 15:00:44.886920] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.908 [2024-12-01 15:00:45.022292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:12.908 [2024-12-01 15:00:45.098700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:12.908 [2024-12-01 15:00:45.098907] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.908 [2024-12-01 15:00:45.098923] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.908 [2024-12-01 15:00:45.098933] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
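Note: once the veth topology exists, the target runs inside the namespace. A minimal stand-in for the nvmfappstart/waitforlisten pair traced above; the socket poll below is a simplification of the real waitforlisten helper, which retries against the RPC socket (max_retries=100 in the trace) rather than just testing for its presence.

# launch nvmf_tgt inside the test namespace with the flags from the trace
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# crude substitute for waitforlisten: block until the RPC socket shows up
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done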
00:20:12.908 [2024-12-01 15:00:45.099024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.908 [2024-12-01 15:00:45.099185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.908 [2024-12-01 15:00:45.099657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.908 15:00:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.908 15:00:45 -- common/autotest_common.sh@862 -- # return 0 00:20:12.908 15:00:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:12.908 15:00:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.908 15:00:45 -- common/autotest_common.sh@10 -- # set +x 00:20:12.908 15:00:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.908 15:00:45 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:12.908 15:00:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.908 15:00:45 -- common/autotest_common.sh@10 -- # set +x 00:20:12.908 [2024-12-01 15:00:45.948278] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.908 15:00:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.908 15:00:45 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:12.908 15:00:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.908 15:00:45 -- common/autotest_common.sh@10 -- # set +x 00:20:12.908 Malloc0 00:20:12.908 15:00:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.908 15:00:45 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.908 15:00:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.908 15:00:45 -- common/autotest_common.sh@10 -- # set +x 00:20:12.908 15:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.908 15:00:46 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:12.908 15:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.908 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:12.908 15:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.908 15:00:46 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.908 15:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.908 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:13.167 [2024-12-01 15:00:46.019086] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.167 15:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.167 15:00:46 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:13.167 15:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.167 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:13.167 [2024-12-01 15:00:46.026966] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:13.167 15:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.167 15:00:46 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:13.167 15:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.167 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:13.167 Malloc1 00:20:13.167 15:00:46 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.167 15:00:46 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:13.167 15:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.167 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:13.167 15:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.167 15:00:46 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:13.167 15:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.167 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:13.167 15:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.167 15:00:46 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:13.167 15:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.167 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:13.167 15:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.167 15:00:46 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:13.167 15:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.167 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:13.167 15:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.167 15:00:46 -- host/multicontroller.sh@44 -- # bdevperf_pid=92810 00:20:13.167 15:00:46 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:13.167 15:00:46 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:13.167 15:00:46 -- host/multicontroller.sh@47 -- # waitforlisten 92810 /var/tmp/bdevperf.sock 00:20:13.167 15:00:46 -- common/autotest_common.sh@829 -- # '[' -z 92810 ']' 00:20:13.167 15:00:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.167 15:00:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.167 15:00:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
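Note: the multicontroller setup driven above through rpc_cmd (the harness wrapper that forwards a command to the target's RPC socket, or to another socket when given -s) condenses to the sequence below; every command and flag is taken from the trace.

# target side: one TCP transport and two subsystems, each listening on 4420 and 4421
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
# host side: bdevperf backgrounded with its own RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
# (the harness then waits for /var/tmp/bdevperf.sock before issuing RPCs)
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# every later attach that reuses the name NVMe0 with a different host NQN, subsystem
# NQN, or multipath mode is expected to fail with Code=-114
# "A controller named NVMe0 already exists ..."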
00:20:13.167 15:00:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.167 15:00:46 -- common/autotest_common.sh@10 -- # set +x 00:20:14.105 15:00:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.105 15:00:47 -- common/autotest_common.sh@862 -- # return 0 00:20:14.105 15:00:47 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:14.105 15:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.105 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:14.105 NVMe0n1 00:20:14.105 15:00:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.105 15:00:47 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:14.105 15:00:47 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:14.105 15:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.105 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:14.105 15:00:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.105 1 00:20:14.105 15:00:47 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:14.105 15:00:47 -- common/autotest_common.sh@650 -- # local es=0 00:20:14.105 15:00:47 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:14.105 15:00:47 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:14.364 15:00:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.364 15:00:47 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:14.364 15:00:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.364 15:00:47 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:14.364 15:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.364 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:14.364 2024/12/01 15:00:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:14.364 request: 00:20:14.364 { 00:20:14.364 "method": "bdev_nvme_attach_controller", 00:20:14.364 "params": { 00:20:14.364 "name": "NVMe0", 00:20:14.364 "trtype": "tcp", 00:20:14.364 "traddr": "10.0.0.2", 00:20:14.364 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:14.364 "hostaddr": "10.0.0.2", 00:20:14.364 "hostsvcid": "60000", 00:20:14.364 "adrfam": "ipv4", 00:20:14.364 "trsvcid": "4420", 00:20:14.364 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:14.364 } 00:20:14.364 } 00:20:14.364 Got JSON-RPC error response 00:20:14.364 GoRPCClient: error on JSON-RPC call 00:20:14.364 15:00:47 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:14.364 15:00:47 -- 
common/autotest_common.sh@653 -- # es=1 00:20:14.364 15:00:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:14.364 15:00:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:14.364 15:00:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:14.364 15:00:47 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:14.364 15:00:47 -- common/autotest_common.sh@650 -- # local es=0 00:20:14.364 15:00:47 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:14.364 15:00:47 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:14.364 15:00:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.364 15:00:47 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:14.364 15:00:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.364 15:00:47 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:14.364 15:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.364 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:14.364 2024/12/01 15:00:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:14.364 request: 00:20:14.364 { 00:20:14.364 "method": "bdev_nvme_attach_controller", 00:20:14.364 "params": { 00:20:14.364 "name": "NVMe0", 00:20:14.364 "trtype": "tcp", 00:20:14.364 "traddr": "10.0.0.2", 00:20:14.364 "hostaddr": "10.0.0.2", 00:20:14.364 "hostsvcid": "60000", 00:20:14.364 "adrfam": "ipv4", 00:20:14.364 "trsvcid": "4420", 00:20:14.364 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:14.364 } 00:20:14.364 } 00:20:14.364 Got JSON-RPC error response 00:20:14.364 GoRPCClient: error on JSON-RPC call 00:20:14.364 15:00:47 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:14.364 15:00:47 -- common/autotest_common.sh@653 -- # es=1 00:20:14.364 15:00:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:14.364 15:00:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:14.364 15:00:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:14.364 15:00:47 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:14.364 15:00:47 -- common/autotest_common.sh@650 -- # local es=0 00:20:14.364 15:00:47 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:14.364 15:00:47 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:14.365 15:00:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.365 15:00:47 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:14.365 15:00:47 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.365 15:00:47 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:14.365 15:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.365 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:14.365 2024/12/01 15:00:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:14.365 request: 00:20:14.365 { 00:20:14.365 "method": "bdev_nvme_attach_controller", 00:20:14.365 "params": { 00:20:14.365 "name": "NVMe0", 00:20:14.365 "trtype": "tcp", 00:20:14.365 "traddr": "10.0.0.2", 00:20:14.365 "hostaddr": "10.0.0.2", 00:20:14.365 "hostsvcid": "60000", 00:20:14.365 "adrfam": "ipv4", 00:20:14.365 "trsvcid": "4420", 00:20:14.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.365 "multipath": "disable" 00:20:14.365 } 00:20:14.365 } 00:20:14.365 Got JSON-RPC error response 00:20:14.365 GoRPCClient: error on JSON-RPC call 00:20:14.365 15:00:47 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:14.365 15:00:47 -- common/autotest_common.sh@653 -- # es=1 00:20:14.365 15:00:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:14.365 15:00:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:14.365 15:00:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:14.365 15:00:47 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:14.365 15:00:47 -- common/autotest_common.sh@650 -- # local es=0 00:20:14.365 15:00:47 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:14.365 15:00:47 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:14.365 15:00:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.365 15:00:47 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:14.365 15:00:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.365 15:00:47 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:14.365 15:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.365 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:14.365 2024/12/01 15:00:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:14.365 request: 00:20:14.365 { 00:20:14.365 "method": "bdev_nvme_attach_controller", 00:20:14.365 "params": { 00:20:14.365 "name": "NVMe0", 
00:20:14.365 "trtype": "tcp", 00:20:14.365 "traddr": "10.0.0.2", 00:20:14.365 "hostaddr": "10.0.0.2", 00:20:14.365 "hostsvcid": "60000", 00:20:14.365 "adrfam": "ipv4", 00:20:14.365 "trsvcid": "4420", 00:20:14.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.365 "multipath": "failover" 00:20:14.365 } 00:20:14.365 } 00:20:14.365 Got JSON-RPC error response 00:20:14.365 GoRPCClient: error on JSON-RPC call 00:20:14.365 15:00:47 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:14.365 15:00:47 -- common/autotest_common.sh@653 -- # es=1 00:20:14.365 15:00:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:14.365 15:00:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:14.365 15:00:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:14.365 15:00:47 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:14.365 15:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.365 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:14.365 00:20:14.365 15:00:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.365 15:00:47 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:14.365 15:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.365 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:14.365 15:00:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.365 15:00:47 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:14.365 15:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.365 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:14.365 00:20:14.365 15:00:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.365 15:00:47 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:14.365 15:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.365 15:00:47 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:14.365 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:20:14.365 15:00:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.365 15:00:47 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:14.365 15:00:47 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.743 0 00:20:15.743 15:00:48 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:15.743 15:00:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.743 15:00:48 -- common/autotest_common.sh@10 -- # set +x 00:20:15.743 15:00:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.743 15:00:48 -- host/multicontroller.sh@100 -- # killprocess 92810 00:20:15.743 15:00:48 -- common/autotest_common.sh@936 -- # '[' -z 92810 ']' 00:20:15.743 15:00:48 -- common/autotest_common.sh@940 -- # kill -0 92810 00:20:15.743 15:00:48 -- common/autotest_common.sh@941 -- # uname 00:20:15.743 15:00:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.743 15:00:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92810 00:20:15.743 15:00:48 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:20:15.743 15:00:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:15.743 killing process with pid 92810 00:20:15.743 15:00:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92810' 00:20:15.743 15:00:48 -- common/autotest_common.sh@955 -- # kill 92810 00:20:15.743 15:00:48 -- common/autotest_common.sh@960 -- # wait 92810 00:20:15.743 15:00:48 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.743 15:00:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.743 15:00:48 -- common/autotest_common.sh@10 -- # set +x 00:20:15.743 15:00:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.743 15:00:48 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:15.743 15:00:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.743 15:00:48 -- common/autotest_common.sh@10 -- # set +x 00:20:15.743 15:00:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.743 15:00:48 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:15.743 15:00:48 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:15.743 15:00:48 -- common/autotest_common.sh@1607 -- # read -r file 00:20:15.743 15:00:48 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:15.743 15:00:48 -- common/autotest_common.sh@1606 -- # sort -u 00:20:16.002 15:00:48 -- common/autotest_common.sh@1608 -- # cat 00:20:16.002 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:16.002 [2024-12-01 15:00:46.141391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:16.002 [2024-12-01 15:00:46.141515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92810 ] 00:20:16.002 [2024-12-01 15:00:46.275563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.002 [2024-12-01 15:00:46.350026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.003 [2024-12-01 15:00:47.423271] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 9c764a55-eeb4-4b37-af52-732e7b267a25 already exists 00:20:16.003 [2024-12-01 15:00:47.423314] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:9c764a55-eeb4-4b37-af52-732e7b267a25 alias for bdev NVMe1n1 00:20:16.003 [2024-12-01 15:00:47.423349] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:16.003 Running I/O for 1 seconds... 
00:20:16.003 00:20:16.003 Latency(us) 00:20:16.003 [2024-12-01T15:00:49.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.003 [2024-12-01T15:00:49.118Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:16.003 NVMe0n1 : 1.01 23999.59 93.75 0.00 0.00 5321.74 1720.32 9770.82 00:20:16.003 [2024-12-01T15:00:49.118Z] =================================================================================================================== 00:20:16.003 [2024-12-01T15:00:49.118Z] Total : 23999.59 93.75 0.00 0.00 5321.74 1720.32 9770.82 00:20:16.003 Received shutdown signal, test time was about 1.000000 seconds 00:20:16.003 00:20:16.003 Latency(us) 00:20:16.003 [2024-12-01T15:00:49.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.003 [2024-12-01T15:00:49.118Z] =================================================================================================================== 00:20:16.003 [2024-12-01T15:00:49.118Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.003 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:16.003 15:00:48 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:16.003 15:00:48 -- common/autotest_common.sh@1607 -- # read -r file 00:20:16.003 15:00:48 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:16.003 15:00:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:16.003 15:00:48 -- nvmf/common.sh@116 -- # sync 00:20:16.003 15:00:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:16.003 15:00:48 -- nvmf/common.sh@119 -- # set +e 00:20:16.003 15:00:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:16.003 15:00:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:16.003 rmmod nvme_tcp 00:20:16.003 rmmod nvme_fabrics 00:20:16.003 rmmod nvme_keyring 00:20:16.003 15:00:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:16.003 15:00:49 -- nvmf/common.sh@123 -- # set -e 00:20:16.003 15:00:49 -- nvmf/common.sh@124 -- # return 0 00:20:16.003 15:00:49 -- nvmf/common.sh@477 -- # '[' -n 92752 ']' 00:20:16.003 15:00:49 -- nvmf/common.sh@478 -- # killprocess 92752 00:20:16.003 15:00:49 -- common/autotest_common.sh@936 -- # '[' -z 92752 ']' 00:20:16.003 15:00:49 -- common/autotest_common.sh@940 -- # kill -0 92752 00:20:16.003 15:00:49 -- common/autotest_common.sh@941 -- # uname 00:20:16.003 15:00:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:16.003 15:00:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92752 00:20:16.003 15:00:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:16.003 15:00:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:16.003 killing process with pid 92752 00:20:16.003 15:00:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92752' 00:20:16.003 15:00:49 -- common/autotest_common.sh@955 -- # kill 92752 00:20:16.003 15:00:49 -- common/autotest_common.sh@960 -- # wait 92752 00:20:16.262 15:00:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:16.262 15:00:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:16.262 15:00:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:16.262 15:00:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.262 15:00:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:16.262 15:00:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.262 15:00:49 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:16.262 15:00:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.521 15:00:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:16.521 00:20:16.521 real 0m5.114s 00:20:16.521 user 0m15.833s 00:20:16.521 sys 0m1.237s 00:20:16.521 15:00:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:16.521 15:00:49 -- common/autotest_common.sh@10 -- # set +x 00:20:16.521 ************************************ 00:20:16.521 END TEST nvmf_multicontroller 00:20:16.521 ************************************ 00:20:16.521 15:00:49 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:16.521 15:00:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:16.521 15:00:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.521 15:00:49 -- common/autotest_common.sh@10 -- # set +x 00:20:16.521 ************************************ 00:20:16.521 START TEST nvmf_aer 00:20:16.521 ************************************ 00:20:16.521 15:00:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:16.521 * Looking for test storage... 00:20:16.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:16.521 15:00:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:16.521 15:00:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:16.521 15:00:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:16.521 15:00:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:16.521 15:00:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:16.521 15:00:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:16.521 15:00:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:16.521 15:00:49 -- scripts/common.sh@335 -- # IFS=.-: 00:20:16.521 15:00:49 -- scripts/common.sh@335 -- # read -ra ver1 00:20:16.521 15:00:49 -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.521 15:00:49 -- scripts/common.sh@336 -- # read -ra ver2 00:20:16.521 15:00:49 -- scripts/common.sh@337 -- # local 'op=<' 00:20:16.521 15:00:49 -- scripts/common.sh@339 -- # ver1_l=2 00:20:16.521 15:00:49 -- scripts/common.sh@340 -- # ver2_l=1 00:20:16.521 15:00:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:16.521 15:00:49 -- scripts/common.sh@343 -- # case "$op" in 00:20:16.521 15:00:49 -- scripts/common.sh@344 -- # : 1 00:20:16.521 15:00:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:16.521 15:00:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:16.521 15:00:49 -- scripts/common.sh@364 -- # decimal 1 00:20:16.521 15:00:49 -- scripts/common.sh@352 -- # local d=1 00:20:16.521 15:00:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.521 15:00:49 -- scripts/common.sh@354 -- # echo 1 00:20:16.521 15:00:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:16.521 15:00:49 -- scripts/common.sh@365 -- # decimal 2 00:20:16.521 15:00:49 -- scripts/common.sh@352 -- # local d=2 00:20:16.521 15:00:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.521 15:00:49 -- scripts/common.sh@354 -- # echo 2 00:20:16.781 15:00:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:16.781 15:00:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:16.781 15:00:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:16.781 15:00:49 -- scripts/common.sh@367 -- # return 0 00:20:16.781 15:00:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.781 15:00:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:16.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.781 --rc genhtml_branch_coverage=1 00:20:16.781 --rc genhtml_function_coverage=1 00:20:16.781 --rc genhtml_legend=1 00:20:16.781 --rc geninfo_all_blocks=1 00:20:16.781 --rc geninfo_unexecuted_blocks=1 00:20:16.781 00:20:16.781 ' 00:20:16.781 15:00:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:16.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.781 --rc genhtml_branch_coverage=1 00:20:16.781 --rc genhtml_function_coverage=1 00:20:16.781 --rc genhtml_legend=1 00:20:16.781 --rc geninfo_all_blocks=1 00:20:16.781 --rc geninfo_unexecuted_blocks=1 00:20:16.781 00:20:16.781 ' 00:20:16.781 15:00:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:16.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.781 --rc genhtml_branch_coverage=1 00:20:16.781 --rc genhtml_function_coverage=1 00:20:16.781 --rc genhtml_legend=1 00:20:16.781 --rc geninfo_all_blocks=1 00:20:16.781 --rc geninfo_unexecuted_blocks=1 00:20:16.781 00:20:16.781 ' 00:20:16.781 15:00:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:16.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.781 --rc genhtml_branch_coverage=1 00:20:16.781 --rc genhtml_function_coverage=1 00:20:16.781 --rc genhtml_legend=1 00:20:16.781 --rc geninfo_all_blocks=1 00:20:16.781 --rc geninfo_unexecuted_blocks=1 00:20:16.781 00:20:16.781 ' 00:20:16.781 15:00:49 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.781 15:00:49 -- nvmf/common.sh@7 -- # uname -s 00:20:16.781 15:00:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.781 15:00:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.781 15:00:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.781 15:00:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.781 15:00:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.781 15:00:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.781 15:00:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.781 15:00:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.781 15:00:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.781 15:00:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.781 15:00:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:20:16.781 
15:00:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:20:16.781 15:00:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.781 15:00:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.781 15:00:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:16.781 15:00:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:16.781 15:00:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.781 15:00:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.781 15:00:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.781 15:00:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.781 15:00:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.781 15:00:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.781 15:00:49 -- paths/export.sh@5 -- # export PATH 00:20:16.781 15:00:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.781 15:00:49 -- nvmf/common.sh@46 -- # : 0 00:20:16.781 15:00:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:16.781 15:00:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:16.781 15:00:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:16.781 15:00:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.781 15:00:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.781 15:00:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:16.781 15:00:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:16.781 15:00:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:16.781 15:00:49 -- host/aer.sh@11 -- # nvmftestinit 00:20:16.781 15:00:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:16.781 15:00:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.781 15:00:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:16.781 15:00:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:16.782 15:00:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:16.782 15:00:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.782 15:00:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.782 15:00:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.782 15:00:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:16.782 15:00:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:16.782 15:00:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:16.782 15:00:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:16.782 15:00:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:16.782 15:00:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:16.782 15:00:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.782 15:00:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.782 15:00:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:16.782 15:00:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:16.782 15:00:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:16.782 15:00:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:16.782 15:00:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:16.782 15:00:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.782 15:00:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:16.782 15:00:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:16.782 15:00:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:16.782 15:00:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:16.782 15:00:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:16.782 15:00:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:16.782 Cannot find device "nvmf_tgt_br" 00:20:16.782 15:00:49 -- nvmf/common.sh@154 -- # true 00:20:16.782 15:00:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:16.782 Cannot find device "nvmf_tgt_br2" 00:20:16.782 15:00:49 -- nvmf/common.sh@155 -- # true 00:20:16.782 15:00:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:16.782 15:00:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:16.782 Cannot find device "nvmf_tgt_br" 00:20:16.782 15:00:49 -- nvmf/common.sh@157 -- # true 00:20:16.782 15:00:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:16.782 Cannot find device "nvmf_tgt_br2" 00:20:16.782 15:00:49 -- nvmf/common.sh@158 -- # true 00:20:16.782 15:00:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:16.782 15:00:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:16.782 15:00:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:16.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.782 15:00:49 -- nvmf/common.sh@161 -- # true 00:20:16.782 15:00:49 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:16.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:16.782 15:00:49 -- nvmf/common.sh@162 -- # true 00:20:16.782 15:00:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:16.782 15:00:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:16.782 15:00:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:16.782 15:00:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:16.782 15:00:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:16.782 15:00:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:16.782 15:00:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:16.782 15:00:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:16.782 15:00:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:16.782 15:00:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:16.782 15:00:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:16.782 15:00:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:16.782 15:00:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:16.782 15:00:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:16.782 15:00:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:16.782 15:00:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:16.782 15:00:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:16.782 15:00:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:17.041 15:00:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:17.041 15:00:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:17.041 15:00:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:17.041 15:00:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:17.041 15:00:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:17.041 15:00:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:17.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:17.041 00:20:17.041 --- 10.0.0.2 ping statistics --- 00:20:17.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.041 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:17.041 15:00:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:17.041 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:17.041 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:20:17.041 00:20:17.041 --- 10.0.0.3 ping statistics --- 00:20:17.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.041 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:17.041 15:00:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:17.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:17.041 00:20:17.041 --- 10.0.0.1 ping statistics --- 00:20:17.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.041 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:17.041 15:00:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.041 15:00:49 -- nvmf/common.sh@421 -- # return 0 00:20:17.041 15:00:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:17.041 15:00:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.041 15:00:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:17.041 15:00:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:17.041 15:00:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.041 15:00:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:17.041 15:00:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:17.041 15:00:49 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:17.041 15:00:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:17.041 15:00:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.041 15:00:49 -- common/autotest_common.sh@10 -- # set +x 00:20:17.041 15:00:49 -- nvmf/common.sh@469 -- # nvmfpid=93060 00:20:17.041 15:00:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:17.041 15:00:49 -- nvmf/common.sh@470 -- # waitforlisten 93060 00:20:17.041 15:00:49 -- common/autotest_common.sh@829 -- # '[' -z 93060 ']' 00:20:17.041 15:00:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.041 15:00:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.041 15:00:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.041 15:00:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.041 15:00:49 -- common/autotest_common.sh@10 -- # set +x 00:20:17.041 [2024-12-01 15:00:50.040858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:17.041 [2024-12-01 15:00:50.040973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.300 [2024-12-01 15:00:50.184409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.300 [2024-12-01 15:00:50.255952] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:17.300 [2024-12-01 15:00:50.256128] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.300 [2024-12-01 15:00:50.256147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.300 [2024-12-01 15:00:50.256158] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
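At this point aer.sh has the target application running inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init just assembled with the ip commands above. A condensed sketch of that topology and of how the test still reaches the target from the default namespace (interface names, addresses, and flags are the ones in this log; the rpc_get_methods probe stands in for the waitforlisten polling and is only an illustrative liveness check):

  # default netns (initiator):  nvmf_init_if  10.0.0.1/24
  # nvmf_tgt_ns_spdk (target):  nvmf_tgt_if   10.0.0.2/24, nvmf_tgt_if2  10.0.0.3/24
  # all veth peers are enslaved to bridge nvmf_br, and TCP/4420 is opened through iptables
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # the RPC UNIX socket lives on the shared filesystem, so rpc.py works from the default namespace
  scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null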
00:20:17.300 [2024-12-01 15:00:50.256323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.300 [2024-12-01 15:00:50.256517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.300 [2024-12-01 15:00:50.256874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.300 [2024-12-01 15:00:50.256894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.236 15:00:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.236 15:00:51 -- common/autotest_common.sh@862 -- # return 0 00:20:18.236 15:00:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:18.236 15:00:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.236 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.236 15:00:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.236 15:00:51 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.236 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.236 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.236 [2024-12-01 15:00:51.098260] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.236 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.236 15:00:51 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:18.236 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.236 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.236 Malloc0 00:20:18.236 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.236 15:00:51 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:18.236 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.236 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.236 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.236 15:00:51 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.236 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.236 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.236 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.236 15:00:51 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.236 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.236 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.236 [2024-12-01 15:00:51.164478] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.236 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.236 15:00:51 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:18.236 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.236 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.236 [2024-12-01 15:00:51.172213] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:18.236 [ 00:20:18.236 { 00:20:18.236 "allow_any_host": true, 00:20:18.236 "hosts": [], 00:20:18.236 "listen_addresses": [], 00:20:18.236 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:18.236 "subtype": "Discovery" 00:20:18.236 }, 00:20:18.236 { 00:20:18.236 "allow_any_host": true, 00:20:18.236 "hosts": 
[], 00:20:18.236 "listen_addresses": [ 00:20:18.236 { 00:20:18.236 "adrfam": "IPv4", 00:20:18.236 "traddr": "10.0.0.2", 00:20:18.236 "transport": "TCP", 00:20:18.236 "trsvcid": "4420", 00:20:18.236 "trtype": "TCP" 00:20:18.236 } 00:20:18.236 ], 00:20:18.236 "max_cntlid": 65519, 00:20:18.236 "max_namespaces": 2, 00:20:18.236 "min_cntlid": 1, 00:20:18.236 "model_number": "SPDK bdev Controller", 00:20:18.236 "namespaces": [ 00:20:18.236 { 00:20:18.236 "bdev_name": "Malloc0", 00:20:18.236 "name": "Malloc0", 00:20:18.236 "nguid": "461A14F24E964EDE81DE5CBA6077FB28", 00:20:18.236 "nsid": 1, 00:20:18.236 "uuid": "461a14f2-4e96-4ede-81de-5cba6077fb28" 00:20:18.236 } 00:20:18.236 ], 00:20:18.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.236 "serial_number": "SPDK00000000000001", 00:20:18.236 "subtype": "NVMe" 00:20:18.236 } 00:20:18.236 ] 00:20:18.236 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.236 15:00:51 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:18.236 15:00:51 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:18.236 15:00:51 -- host/aer.sh@33 -- # aerpid=93122 00:20:18.236 15:00:51 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:18.236 15:00:51 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:18.237 15:00:51 -- common/autotest_common.sh@1254 -- # local i=0 00:20:18.237 15:00:51 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:18.237 15:00:51 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:18.237 15:00:51 -- common/autotest_common.sh@1257 -- # i=1 00:20:18.237 15:00:51 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:18.237 15:00:51 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:18.237 15:00:51 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:18.237 15:00:51 -- common/autotest_common.sh@1257 -- # i=2 00:20:18.237 15:00:51 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:18.496 15:00:51 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:18.496 15:00:51 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:18.496 15:00:51 -- common/autotest_common.sh@1265 -- # return 0 00:20:18.496 15:00:51 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:18.496 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.496 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.496 Malloc1 00:20:18.496 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.496 15:00:51 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:18.496 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.496 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.496 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.496 15:00:51 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:18.496 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.496 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.496 Asynchronous Event Request test 00:20:18.496 Attaching to 10.0.0.2 00:20:18.496 Attached to 10.0.0.2 00:20:18.496 Registering asynchronous event callbacks... 00:20:18.496 Starting namespace attribute notice tests for all controllers... 
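The sequence above is the trigger for the event reported next: test/nvme/aer/aer stays attached to cnode1 waiting for a namespace-attribute notice, and adding a second namespace from the RPC side is what fires it. Condensed from the trace, with paths shortened to the SPDK repo root (the touch file is the handshake the script polls for):

  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # adding nsid 2 while the tool is attached raises the Changed Namespace AEN (log page 0x04) seen below
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2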
00:20:18.496 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:18.496 aer_cb - Changed Namespace 00:20:18.496 Cleaning up... 00:20:18.496 [ 00:20:18.496 { 00:20:18.496 "allow_any_host": true, 00:20:18.496 "hosts": [], 00:20:18.496 "listen_addresses": [], 00:20:18.496 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:18.496 "subtype": "Discovery" 00:20:18.496 }, 00:20:18.496 { 00:20:18.496 "allow_any_host": true, 00:20:18.496 "hosts": [], 00:20:18.496 "listen_addresses": [ 00:20:18.496 { 00:20:18.496 "adrfam": "IPv4", 00:20:18.496 "traddr": "10.0.0.2", 00:20:18.496 "transport": "TCP", 00:20:18.496 "trsvcid": "4420", 00:20:18.496 "trtype": "TCP" 00:20:18.496 } 00:20:18.496 ], 00:20:18.496 "max_cntlid": 65519, 00:20:18.496 "max_namespaces": 2, 00:20:18.497 "min_cntlid": 1, 00:20:18.497 "model_number": "SPDK bdev Controller", 00:20:18.497 "namespaces": [ 00:20:18.497 { 00:20:18.497 "bdev_name": "Malloc0", 00:20:18.497 "name": "Malloc0", 00:20:18.497 "nguid": "461A14F24E964EDE81DE5CBA6077FB28", 00:20:18.497 "nsid": 1, 00:20:18.497 "uuid": "461a14f2-4e96-4ede-81de-5cba6077fb28" 00:20:18.497 }, 00:20:18.497 { 00:20:18.497 "bdev_name": "Malloc1", 00:20:18.497 "name": "Malloc1", 00:20:18.497 "nguid": "3F27931A5A6146FA9E0C647919B093C3", 00:20:18.497 "nsid": 2, 00:20:18.497 "uuid": "3f27931a-5a61-46fa-9e0c-647919b093c3" 00:20:18.497 } 00:20:18.497 ], 00:20:18.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.497 "serial_number": "SPDK00000000000001", 00:20:18.497 "subtype": "NVMe" 00:20:18.497 } 00:20:18.497 ] 00:20:18.497 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.497 15:00:51 -- host/aer.sh@43 -- # wait 93122 00:20:18.497 15:00:51 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:18.497 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.497 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.497 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.497 15:00:51 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:18.497 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.497 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.497 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.497 15:00:51 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:18.497 15:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.497 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:18.497 15:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.497 15:00:51 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:18.497 15:00:51 -- host/aer.sh@51 -- # nvmftestfini 00:20:18.497 15:00:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:18.497 15:00:51 -- nvmf/common.sh@116 -- # sync 00:20:18.757 15:00:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:18.757 15:00:51 -- nvmf/common.sh@119 -- # set +e 00:20:18.757 15:00:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:18.757 15:00:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:18.757 rmmod nvme_tcp 00:20:18.757 rmmod nvme_fabrics 00:20:18.757 rmmod nvme_keyring 00:20:18.757 15:00:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:18.757 15:00:51 -- nvmf/common.sh@123 -- # set -e 00:20:18.757 15:00:51 -- nvmf/common.sh@124 -- # return 0 00:20:18.757 15:00:51 -- nvmf/common.sh@477 -- # '[' -n 93060 ']' 00:20:18.757 15:00:51 -- nvmf/common.sh@478 -- # killprocess 93060 00:20:18.757 15:00:51 -- 
common/autotest_common.sh@936 -- # '[' -z 93060 ']' 00:20:18.757 15:00:51 -- common/autotest_common.sh@940 -- # kill -0 93060 00:20:18.757 15:00:51 -- common/autotest_common.sh@941 -- # uname 00:20:18.757 15:00:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:18.757 15:00:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93060 00:20:18.757 killing process with pid 93060 00:20:18.757 15:00:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:18.757 15:00:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:18.757 15:00:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93060' 00:20:18.757 15:00:51 -- common/autotest_common.sh@955 -- # kill 93060 00:20:18.757 [2024-12-01 15:00:51.718676] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:18.757 15:00:51 -- common/autotest_common.sh@960 -- # wait 93060 00:20:19.016 15:00:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:19.016 15:00:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:19.016 15:00:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:19.016 15:00:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:19.016 15:00:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:19.016 15:00:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.016 15:00:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.016 15:00:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.016 15:00:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:19.016 00:20:19.016 real 0m2.501s 00:20:19.016 user 0m6.964s 00:20:19.016 sys 0m0.702s 00:20:19.016 15:00:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:19.016 ************************************ 00:20:19.016 END TEST nvmf_aer 00:20:19.016 ************************************ 00:20:19.016 15:00:51 -- common/autotest_common.sh@10 -- # set +x 00:20:19.016 15:00:52 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:19.016 15:00:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:19.016 15:00:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:19.016 15:00:52 -- common/autotest_common.sh@10 -- # set +x 00:20:19.016 ************************************ 00:20:19.016 START TEST nvmf_async_init 00:20:19.016 ************************************ 00:20:19.016 15:00:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:19.016 * Looking for test storage... 
00:20:19.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:19.016 15:00:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:19.016 15:00:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:19.016 15:00:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:19.276 15:00:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:19.276 15:00:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:19.276 15:00:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:19.276 15:00:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:19.276 15:00:52 -- scripts/common.sh@335 -- # IFS=.-: 00:20:19.276 15:00:52 -- scripts/common.sh@335 -- # read -ra ver1 00:20:19.276 15:00:52 -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.276 15:00:52 -- scripts/common.sh@336 -- # read -ra ver2 00:20:19.276 15:00:52 -- scripts/common.sh@337 -- # local 'op=<' 00:20:19.276 15:00:52 -- scripts/common.sh@339 -- # ver1_l=2 00:20:19.276 15:00:52 -- scripts/common.sh@340 -- # ver2_l=1 00:20:19.276 15:00:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:19.276 15:00:52 -- scripts/common.sh@343 -- # case "$op" in 00:20:19.276 15:00:52 -- scripts/common.sh@344 -- # : 1 00:20:19.276 15:00:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:19.276 15:00:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:19.276 15:00:52 -- scripts/common.sh@364 -- # decimal 1 00:20:19.276 15:00:52 -- scripts/common.sh@352 -- # local d=1 00:20:19.276 15:00:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.276 15:00:52 -- scripts/common.sh@354 -- # echo 1 00:20:19.276 15:00:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:19.276 15:00:52 -- scripts/common.sh@365 -- # decimal 2 00:20:19.276 15:00:52 -- scripts/common.sh@352 -- # local d=2 00:20:19.276 15:00:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.276 15:00:52 -- scripts/common.sh@354 -- # echo 2 00:20:19.276 15:00:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:19.276 15:00:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:19.276 15:00:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:19.276 15:00:52 -- scripts/common.sh@367 -- # return 0 00:20:19.276 15:00:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.276 15:00:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:19.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.276 --rc genhtml_branch_coverage=1 00:20:19.276 --rc genhtml_function_coverage=1 00:20:19.276 --rc genhtml_legend=1 00:20:19.276 --rc geninfo_all_blocks=1 00:20:19.276 --rc geninfo_unexecuted_blocks=1 00:20:19.276 00:20:19.276 ' 00:20:19.276 15:00:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:19.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.276 --rc genhtml_branch_coverage=1 00:20:19.276 --rc genhtml_function_coverage=1 00:20:19.276 --rc genhtml_legend=1 00:20:19.276 --rc geninfo_all_blocks=1 00:20:19.276 --rc geninfo_unexecuted_blocks=1 00:20:19.276 00:20:19.276 ' 00:20:19.276 15:00:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:19.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.276 --rc genhtml_branch_coverage=1 00:20:19.276 --rc genhtml_function_coverage=1 00:20:19.276 --rc genhtml_legend=1 00:20:19.276 --rc geninfo_all_blocks=1 00:20:19.276 --rc geninfo_unexecuted_blocks=1 00:20:19.276 00:20:19.276 ' 00:20:19.276 
15:00:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:19.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.276 --rc genhtml_branch_coverage=1 00:20:19.276 --rc genhtml_function_coverage=1 00:20:19.276 --rc genhtml_legend=1 00:20:19.276 --rc geninfo_all_blocks=1 00:20:19.276 --rc geninfo_unexecuted_blocks=1 00:20:19.276 00:20:19.276 ' 00:20:19.276 15:00:52 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.276 15:00:52 -- nvmf/common.sh@7 -- # uname -s 00:20:19.276 15:00:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.276 15:00:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.276 15:00:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.276 15:00:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.276 15:00:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.276 15:00:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.276 15:00:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.276 15:00:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.276 15:00:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.276 15:00:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.276 15:00:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:20:19.276 15:00:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:20:19.276 15:00:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.276 15:00:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.276 15:00:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.276 15:00:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.276 15:00:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.276 15:00:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.276 15:00:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.276 15:00:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.276 15:00:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.277 15:00:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.277 15:00:52 -- paths/export.sh@5 -- # export PATH 00:20:19.277 15:00:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.277 15:00:52 -- nvmf/common.sh@46 -- # : 0 00:20:19.277 15:00:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:19.277 15:00:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:19.277 15:00:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:19.277 15:00:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.277 15:00:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.277 15:00:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:19.277 15:00:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:19.277 15:00:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:19.277 15:00:52 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:19.277 15:00:52 -- host/async_init.sh@14 -- # null_block_size=512 00:20:19.277 15:00:52 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:19.277 15:00:52 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:19.277 15:00:52 -- host/async_init.sh@20 -- # uuidgen 00:20:19.277 15:00:52 -- host/async_init.sh@20 -- # tr -d - 00:20:19.277 15:00:52 -- host/async_init.sh@20 -- # nguid=a6c243a070664cb4be532615e92e8996 00:20:19.277 15:00:52 -- host/async_init.sh@22 -- # nvmftestinit 00:20:19.277 15:00:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:19.277 15:00:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.277 15:00:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:19.277 15:00:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:19.277 15:00:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:19.277 15:00:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.277 15:00:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.277 15:00:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.277 15:00:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:19.277 15:00:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:19.277 15:00:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:19.277 15:00:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:19.277 15:00:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:19.277 15:00:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:19.277 15:00:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.277 15:00:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.277 15:00:52 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:19.277 15:00:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:19.277 15:00:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.277 15:00:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.277 15:00:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.277 15:00:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.277 15:00:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.277 15:00:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.277 15:00:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.277 15:00:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.277 15:00:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:19.277 15:00:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:19.277 Cannot find device "nvmf_tgt_br" 00:20:19.277 15:00:52 -- nvmf/common.sh@154 -- # true 00:20:19.277 15:00:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.277 Cannot find device "nvmf_tgt_br2" 00:20:19.277 15:00:52 -- nvmf/common.sh@155 -- # true 00:20:19.277 15:00:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:19.277 15:00:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:19.277 Cannot find device "nvmf_tgt_br" 00:20:19.277 15:00:52 -- nvmf/common.sh@157 -- # true 00:20:19.277 15:00:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:19.277 Cannot find device "nvmf_tgt_br2" 00:20:19.277 15:00:52 -- nvmf/common.sh@158 -- # true 00:20:19.277 15:00:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:19.277 15:00:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:19.277 15:00:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.277 15:00:52 -- nvmf/common.sh@161 -- # true 00:20:19.277 15:00:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.277 15:00:52 -- nvmf/common.sh@162 -- # true 00:20:19.277 15:00:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:19.277 15:00:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:19.277 15:00:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:19.277 15:00:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:19.536 15:00:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:19.536 15:00:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:19.536 15:00:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:19.536 15:00:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:19.536 15:00:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:19.536 15:00:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:19.536 15:00:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:19.536 15:00:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:19.536 15:00:52 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:19.536 15:00:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:19.536 15:00:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:19.536 15:00:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:19.536 15:00:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:19.536 15:00:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:19.536 15:00:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:19.536 15:00:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:19.536 15:00:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:19.536 15:00:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:19.536 15:00:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:19.536 15:00:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:19.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:20:19.536 00:20:19.536 --- 10.0.0.2 ping statistics --- 00:20:19.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.536 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:19.536 15:00:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:19.536 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:19.536 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:20:19.536 00:20:19.536 --- 10.0.0.3 ping statistics --- 00:20:19.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.536 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:19.536 15:00:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:19.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:19.536 00:20:19.536 --- 10.0.0.1 ping statistics --- 00:20:19.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.536 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:19.536 15:00:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.536 15:00:52 -- nvmf/common.sh@421 -- # return 0 00:20:19.536 15:00:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:19.536 15:00:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.536 15:00:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:19.536 15:00:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:19.536 15:00:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.536 15:00:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:19.536 15:00:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:19.536 15:00:52 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:19.536 15:00:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:19.536 15:00:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:19.536 15:00:52 -- common/autotest_common.sh@10 -- # set +x 00:20:19.536 15:00:52 -- nvmf/common.sh@469 -- # nvmfpid=93301 00:20:19.536 15:00:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:19.536 15:00:52 -- nvmf/common.sh@470 -- # waitforlisten 93301 00:20:19.536 15:00:52 -- common/autotest_common.sh@829 -- # '[' -z 93301 ']' 00:20:19.536 15:00:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.536 15:00:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:19.536 15:00:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.537 15:00:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.537 15:00:52 -- common/autotest_common.sh@10 -- # set +x 00:20:19.537 [2024-12-01 15:00:52.640498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:19.537 [2024-12-01 15:00:52.640772] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.796 [2024-12-01 15:00:52.782612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.796 [2024-12-01 15:00:52.847262] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:19.796 [2024-12-01 15:00:52.847728] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.796 [2024-12-01 15:00:52.847923] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.796 [2024-12-01 15:00:52.847949] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
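The nvmf_veth_init block earlier in this run (the ip/iptables commands and pings traced above) builds the throw-away test network that the nvmf host tests reuse: one initiator veth pair left in the root namespace, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, and all of the host-side peers enslaved to a single bridge, verified with pings in both directions before nvmfappstart launches the target inside the namespace as just shown. Condensed into a hand-written sketch (commands, names and addresses as they appear in the trace; run as root with iproute2 and iptables available):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator on .1, the two target interfaces on .2 and .3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peer ends together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic on 4420, allow bridged forwarding, then verify connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1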
00:20:19.796 [2024-12-01 15:00:52.847994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.732 15:00:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:20.732 15:00:53 -- common/autotest_common.sh@862 -- # return 0 00:20:20.732 15:00:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:20.732 15:00:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:20.732 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:20:20.732 15:00:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.732 15:00:53 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:20.732 15:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.732 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:20:20.732 [2024-12-01 15:00:53.695187] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.732 15:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.732 15:00:53 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:20.732 15:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.732 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:20:20.732 null0 00:20:20.732 15:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.732 15:00:53 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:20.732 15:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.732 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:20:20.732 15:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.732 15:00:53 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:20.732 15:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.732 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:20:20.732 15:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.732 15:00:53 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a6c243a070664cb4be532615e92e8996 00:20:20.732 15:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.732 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:20:20.732 15:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.732 15:00:53 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:20.732 15:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.732 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:20:20.732 [2024-12-01 15:00:53.735263] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.732 15:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.732 15:00:53 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:20.732 15:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.732 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:20:20.991 nvme0n1 00:20:20.991 15:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.991 15:00:53 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:20.991 15:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.991 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:20:20.991 [ 00:20:20.991 { 00:20:20.991 "aliases": [ 00:20:20.991 "a6c243a0-7066-4cb4-be53-2615e92e8996" 
00:20:20.991 ], 00:20:20.991 "assigned_rate_limits": { 00:20:20.991 "r_mbytes_per_sec": 0, 00:20:20.991 "rw_ios_per_sec": 0, 00:20:20.991 "rw_mbytes_per_sec": 0, 00:20:20.991 "w_mbytes_per_sec": 0 00:20:20.991 }, 00:20:20.991 "block_size": 512, 00:20:20.991 "claimed": false, 00:20:20.991 "driver_specific": { 00:20:20.991 "mp_policy": "active_passive", 00:20:20.991 "nvme": [ 00:20:20.991 { 00:20:20.991 "ctrlr_data": { 00:20:20.991 "ana_reporting": false, 00:20:20.991 "cntlid": 1, 00:20:20.991 "firmware_revision": "24.01.1", 00:20:20.991 "model_number": "SPDK bdev Controller", 00:20:20.991 "multi_ctrlr": true, 00:20:20.991 "oacs": { 00:20:20.991 "firmware": 0, 00:20:20.991 "format": 0, 00:20:20.991 "ns_manage": 0, 00:20:20.991 "security": 0 00:20:20.991 }, 00:20:20.991 "serial_number": "00000000000000000000", 00:20:20.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.991 "vendor_id": "0x8086" 00:20:20.991 }, 00:20:20.991 "ns_data": { 00:20:20.991 "can_share": true, 00:20:20.991 "id": 1 00:20:20.991 }, 00:20:20.991 "trid": { 00:20:20.991 "adrfam": "IPv4", 00:20:20.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.991 "traddr": "10.0.0.2", 00:20:20.991 "trsvcid": "4420", 00:20:20.991 "trtype": "TCP" 00:20:20.991 }, 00:20:20.991 "vs": { 00:20:20.991 "nvme_version": "1.3" 00:20:20.991 } 00:20:20.991 } 00:20:20.991 ] 00:20:20.991 }, 00:20:20.991 "name": "nvme0n1", 00:20:20.991 "num_blocks": 2097152, 00:20:20.991 "product_name": "NVMe disk", 00:20:20.991 "supported_io_types": { 00:20:20.991 "abort": true, 00:20:20.991 "compare": true, 00:20:20.991 "compare_and_write": true, 00:20:20.991 "flush": true, 00:20:20.991 "nvme_admin": true, 00:20:20.991 "nvme_io": true, 00:20:20.991 "read": true, 00:20:20.991 "reset": true, 00:20:20.991 "unmap": false, 00:20:20.991 "write": true, 00:20:20.991 "write_zeroes": true 00:20:20.991 }, 00:20:20.991 "uuid": "a6c243a0-7066-4cb4-be53-2615e92e8996", 00:20:20.991 "zoned": false 00:20:20.991 } 00:20:20.991 ] 00:20:20.991 15:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.991 15:00:53 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:20.991 15:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.991 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:20:20.991 [2024-12-01 15:00:53.995204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:20.991 [2024-12-01 15:00:53.995280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c9a00 (9): Bad file descriptor 00:20:21.250 [2024-12-01 15:00:54.126857] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
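Up to this point async_init.sh has only issued a handful of RPCs against the target it just started: create the TCP transport, back a subsystem with a 1024 MiB null bdev using 512-byte blocks (the 2097152 num_blocks in the JSON above), expose it with a fixed NGUID on 10.0.0.2:4420, attach to it from the host stack of the same application so the namespace surfaces as bdev nvme0n1, and then reset the controller, which is the disconnect/reconnect visible just above. A minimal sketch of the same sequence using scripts/rpc.py directly instead of the test's rpc_cmd wrapper (the rpc.py location and the default /var/tmp/spdk.sock socket are assumptions; the commands, names and NGUID are copied from the trace):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: transport, null bdev, subsystem, namespace, listener
$RPC nvmf_create_transport -t tcp -o
$RPC bdev_null_create null0 1024 512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a6c243a070664cb4be532615e92e8996
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Host side (same app): attach over TCP, inspect the resulting bdev, then reset
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$RPC bdev_get_bdevs -b nvme0n1
$RPC bdev_nvme_reset_controller nvme0

The dump above shows cntlid 1 before the reset; after the controller reconnects, the next bdev_get_bdevs dump reports cntlid 2 while the namespace UUID a6c243a0-7066-4cb4-be53-2615e92e8996 (the NGUID reformatted) stays the same.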
00:20:21.250 15:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.250 15:00:54 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:21.250 15:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.250 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:21.250 [ 00:20:21.250 { 00:20:21.250 "aliases": [ 00:20:21.250 "a6c243a0-7066-4cb4-be53-2615e92e8996" 00:20:21.250 ], 00:20:21.250 "assigned_rate_limits": { 00:20:21.250 "r_mbytes_per_sec": 0, 00:20:21.250 "rw_ios_per_sec": 0, 00:20:21.250 "rw_mbytes_per_sec": 0, 00:20:21.250 "w_mbytes_per_sec": 0 00:20:21.250 }, 00:20:21.250 "block_size": 512, 00:20:21.250 "claimed": false, 00:20:21.250 "driver_specific": { 00:20:21.250 "mp_policy": "active_passive", 00:20:21.250 "nvme": [ 00:20:21.250 { 00:20:21.250 "ctrlr_data": { 00:20:21.250 "ana_reporting": false, 00:20:21.250 "cntlid": 2, 00:20:21.250 "firmware_revision": "24.01.1", 00:20:21.250 "model_number": "SPDK bdev Controller", 00:20:21.250 "multi_ctrlr": true, 00:20:21.250 "oacs": { 00:20:21.250 "firmware": 0, 00:20:21.250 "format": 0, 00:20:21.250 "ns_manage": 0, 00:20:21.250 "security": 0 00:20:21.250 }, 00:20:21.250 "serial_number": "00000000000000000000", 00:20:21.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:21.250 "vendor_id": "0x8086" 00:20:21.250 }, 00:20:21.250 "ns_data": { 00:20:21.250 "can_share": true, 00:20:21.250 "id": 1 00:20:21.250 }, 00:20:21.250 "trid": { 00:20:21.250 "adrfam": "IPv4", 00:20:21.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:21.250 "traddr": "10.0.0.2", 00:20:21.250 "trsvcid": "4420", 00:20:21.250 "trtype": "TCP" 00:20:21.250 }, 00:20:21.250 "vs": { 00:20:21.250 "nvme_version": "1.3" 00:20:21.250 } 00:20:21.250 } 00:20:21.250 ] 00:20:21.250 }, 00:20:21.250 "name": "nvme0n1", 00:20:21.250 "num_blocks": 2097152, 00:20:21.250 "product_name": "NVMe disk", 00:20:21.250 "supported_io_types": { 00:20:21.250 "abort": true, 00:20:21.250 "compare": true, 00:20:21.250 "compare_and_write": true, 00:20:21.250 "flush": true, 00:20:21.250 "nvme_admin": true, 00:20:21.250 "nvme_io": true, 00:20:21.250 "read": true, 00:20:21.250 "reset": true, 00:20:21.250 "unmap": false, 00:20:21.250 "write": true, 00:20:21.250 "write_zeroes": true 00:20:21.250 }, 00:20:21.250 "uuid": "a6c243a0-7066-4cb4-be53-2615e92e8996", 00:20:21.250 "zoned": false 00:20:21.250 } 00:20:21.250 ] 00:20:21.250 15:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.250 15:00:54 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.250 15:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.250 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:21.250 15:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.250 15:00:54 -- host/async_init.sh@53 -- # mktemp 00:20:21.250 15:00:54 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.quiuoHBaD8 00:20:21.250 15:00:54 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:21.250 15:00:54 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.quiuoHBaD8 00:20:21.250 15:00:54 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:21.250 15:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.250 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:21.250 15:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.250 15:00:54 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:21.250 15:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.250 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:21.250 [2024-12-01 15:00:54.187330] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.250 [2024-12-01 15:00:54.187441] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:21.250 15:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.251 15:00:54 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.quiuoHBaD8 00:20:21.251 15:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.251 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:21.251 15:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.251 15:00:54 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.quiuoHBaD8 00:20:21.251 15:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.251 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:21.251 [2024-12-01 15:00:54.203334] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.251 nvme0n1 00:20:21.251 15:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.251 15:00:54 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:21.251 15:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.251 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:21.251 [ 00:20:21.251 { 00:20:21.251 "aliases": [ 00:20:21.251 "a6c243a0-7066-4cb4-be53-2615e92e8996" 00:20:21.251 ], 00:20:21.251 "assigned_rate_limits": { 00:20:21.251 "r_mbytes_per_sec": 0, 00:20:21.251 "rw_ios_per_sec": 0, 00:20:21.251 "rw_mbytes_per_sec": 0, 00:20:21.251 "w_mbytes_per_sec": 0 00:20:21.251 }, 00:20:21.251 "block_size": 512, 00:20:21.251 "claimed": false, 00:20:21.251 "driver_specific": { 00:20:21.251 "mp_policy": "active_passive", 00:20:21.251 "nvme": [ 00:20:21.251 { 00:20:21.251 "ctrlr_data": { 00:20:21.251 "ana_reporting": false, 00:20:21.251 "cntlid": 3, 00:20:21.251 "firmware_revision": "24.01.1", 00:20:21.251 "model_number": "SPDK bdev Controller", 00:20:21.251 "multi_ctrlr": true, 00:20:21.251 "oacs": { 00:20:21.251 "firmware": 0, 00:20:21.251 "format": 0, 00:20:21.251 "ns_manage": 0, 00:20:21.251 "security": 0 00:20:21.251 }, 00:20:21.251 "serial_number": "00000000000000000000", 00:20:21.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:21.251 "vendor_id": "0x8086" 00:20:21.251 }, 00:20:21.251 "ns_data": { 00:20:21.251 "can_share": true, 00:20:21.251 "id": 1 00:20:21.251 }, 00:20:21.251 "trid": { 00:20:21.251 "adrfam": "IPv4", 00:20:21.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:21.251 "traddr": "10.0.0.2", 00:20:21.251 "trsvcid": "4421", 00:20:21.251 "trtype": "TCP" 00:20:21.251 }, 00:20:21.251 "vs": { 00:20:21.251 "nvme_version": "1.3" 00:20:21.251 } 00:20:21.251 } 00:20:21.251 ] 00:20:21.251 }, 00:20:21.251 "name": "nvme0n1", 00:20:21.251 "num_blocks": 2097152, 00:20:21.251 "product_name": "NVMe disk", 00:20:21.251 "supported_io_types": { 00:20:21.251 "abort": true, 00:20:21.251 "compare": true, 00:20:21.251 "compare_and_write": true, 00:20:21.251 "flush": true, 00:20:21.251 "nvme_admin": true, 00:20:21.251 "nvme_io": true, 00:20:21.251 
"read": true, 00:20:21.251 "reset": true, 00:20:21.251 "unmap": false, 00:20:21.251 "write": true, 00:20:21.251 "write_zeroes": true 00:20:21.251 }, 00:20:21.251 "uuid": "a6c243a0-7066-4cb4-be53-2615e92e8996", 00:20:21.251 "zoned": false 00:20:21.251 } 00:20:21.251 ] 00:20:21.251 15:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.251 15:00:54 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.251 15:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.251 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:21.251 15:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.251 15:00:54 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.quiuoHBaD8 00:20:21.251 15:00:54 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:21.251 15:00:54 -- host/async_init.sh@78 -- # nvmftestfini 00:20:21.251 15:00:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:21.251 15:00:54 -- nvmf/common.sh@116 -- # sync 00:20:21.251 15:00:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:21.251 15:00:54 -- nvmf/common.sh@119 -- # set +e 00:20:21.251 15:00:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:21.251 15:00:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:21.510 rmmod nvme_tcp 00:20:21.510 rmmod nvme_fabrics 00:20:21.510 rmmod nvme_keyring 00:20:21.510 15:00:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:21.510 15:00:54 -- nvmf/common.sh@123 -- # set -e 00:20:21.510 15:00:54 -- nvmf/common.sh@124 -- # return 0 00:20:21.510 15:00:54 -- nvmf/common.sh@477 -- # '[' -n 93301 ']' 00:20:21.510 15:00:54 -- nvmf/common.sh@478 -- # killprocess 93301 00:20:21.510 15:00:54 -- common/autotest_common.sh@936 -- # '[' -z 93301 ']' 00:20:21.510 15:00:54 -- common/autotest_common.sh@940 -- # kill -0 93301 00:20:21.510 15:00:54 -- common/autotest_common.sh@941 -- # uname 00:20:21.510 15:00:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:21.510 15:00:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93301 00:20:21.510 killing process with pid 93301 00:20:21.510 15:00:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:21.510 15:00:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:21.510 15:00:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93301' 00:20:21.510 15:00:54 -- common/autotest_common.sh@955 -- # kill 93301 00:20:21.510 15:00:54 -- common/autotest_common.sh@960 -- # wait 93301 00:20:21.510 15:00:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:21.510 15:00:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:21.510 15:00:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:21.510 15:00:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.510 15:00:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:21.769 15:00:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.769 15:00:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.769 15:00:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.769 15:00:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:21.769 00:20:21.769 real 0m2.649s 00:20:21.769 user 0m2.476s 00:20:21.769 sys 0m0.647s 00:20:21.769 15:00:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:21.769 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:21.769 ************************************ 00:20:21.769 END TEST nvmf_async_init 00:20:21.769 
************************************ 00:20:21.769 15:00:54 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:21.769 15:00:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:21.769 15:00:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:21.769 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:21.769 ************************************ 00:20:21.769 START TEST dma 00:20:21.769 ************************************ 00:20:21.769 15:00:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:21.769 * Looking for test storage... 00:20:21.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:21.769 15:00:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:21.769 15:00:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:21.769 15:00:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:22.029 15:00:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:22.029 15:00:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:22.029 15:00:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:22.029 15:00:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:22.029 15:00:54 -- scripts/common.sh@335 -- # IFS=.-: 00:20:22.029 15:00:54 -- scripts/common.sh@335 -- # read -ra ver1 00:20:22.029 15:00:54 -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.029 15:00:54 -- scripts/common.sh@336 -- # read -ra ver2 00:20:22.029 15:00:54 -- scripts/common.sh@337 -- # local 'op=<' 00:20:22.029 15:00:54 -- scripts/common.sh@339 -- # ver1_l=2 00:20:22.029 15:00:54 -- scripts/common.sh@340 -- # ver2_l=1 00:20:22.029 15:00:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:22.029 15:00:54 -- scripts/common.sh@343 -- # case "$op" in 00:20:22.029 15:00:54 -- scripts/common.sh@344 -- # : 1 00:20:22.029 15:00:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:22.029 15:00:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:22.029 15:00:54 -- scripts/common.sh@364 -- # decimal 1 00:20:22.029 15:00:54 -- scripts/common.sh@352 -- # local d=1 00:20:22.029 15:00:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.029 15:00:54 -- scripts/common.sh@354 -- # echo 1 00:20:22.029 15:00:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:22.029 15:00:54 -- scripts/common.sh@365 -- # decimal 2 00:20:22.029 15:00:54 -- scripts/common.sh@352 -- # local d=2 00:20:22.029 15:00:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.029 15:00:54 -- scripts/common.sh@354 -- # echo 2 00:20:22.029 15:00:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:22.029 15:00:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:22.029 15:00:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:22.029 15:00:54 -- scripts/common.sh@367 -- # return 0 00:20:22.029 15:00:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.029 15:00:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:22.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.029 --rc genhtml_branch_coverage=1 00:20:22.029 --rc genhtml_function_coverage=1 00:20:22.030 --rc genhtml_legend=1 00:20:22.030 --rc geninfo_all_blocks=1 00:20:22.030 --rc geninfo_unexecuted_blocks=1 00:20:22.030 00:20:22.030 ' 00:20:22.030 15:00:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:22.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.030 --rc genhtml_branch_coverage=1 00:20:22.030 --rc genhtml_function_coverage=1 00:20:22.030 --rc genhtml_legend=1 00:20:22.030 --rc geninfo_all_blocks=1 00:20:22.030 --rc geninfo_unexecuted_blocks=1 00:20:22.030 00:20:22.030 ' 00:20:22.030 15:00:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:22.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.030 --rc genhtml_branch_coverage=1 00:20:22.030 --rc genhtml_function_coverage=1 00:20:22.030 --rc genhtml_legend=1 00:20:22.030 --rc geninfo_all_blocks=1 00:20:22.030 --rc geninfo_unexecuted_blocks=1 00:20:22.030 00:20:22.030 ' 00:20:22.030 15:00:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:22.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.030 --rc genhtml_branch_coverage=1 00:20:22.030 --rc genhtml_function_coverage=1 00:20:22.030 --rc genhtml_legend=1 00:20:22.030 --rc geninfo_all_blocks=1 00:20:22.030 --rc geninfo_unexecuted_blocks=1 00:20:22.030 00:20:22.030 ' 00:20:22.030 15:00:54 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.030 15:00:54 -- nvmf/common.sh@7 -- # uname -s 00:20:22.030 15:00:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.030 15:00:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.030 15:00:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.030 15:00:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.030 15:00:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.030 15:00:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.030 15:00:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.030 15:00:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.030 15:00:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.030 15:00:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.030 15:00:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:20:22.030 
15:00:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:20:22.030 15:00:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.030 15:00:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.030 15:00:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:22.030 15:00:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.030 15:00:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.030 15:00:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.030 15:00:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.030 15:00:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.030 15:00:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.030 15:00:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.030 15:00:54 -- paths/export.sh@5 -- # export PATH 00:20:22.030 15:00:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.030 15:00:54 -- nvmf/common.sh@46 -- # : 0 00:20:22.030 15:00:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:22.030 15:00:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:22.030 15:00:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:22.030 15:00:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.030 15:00:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.030 15:00:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
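The oversized PATH lines here, as in the earlier tests, are /etc/opt/spdk-pkgdep/paths/export.sh re-prepending the protoc, Go and golangci-lint directories every time a test script sources the common helpers, so the same triple stacks up once per nested source; the duplication is harmless. The build_nvmf_app_args trace that follows it is where the eventual nvmf_tgt command line comes from. Roughly, as a paraphrase of the traced assignments (array names from the log, binary path as shown in the earlier nvmfappstart call, not a copy of common.sh itself):

NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # shm id 0 plus all tracepoint groups
# Once the veth namespace exists, the whole command is wrapped so it runs inside it:
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
# nvmfappstart then appends the caller's core mask and backgrounds the target, e.g.:
"${NVMF_APP[@]}" -m 0x1 &
nvmfpid=$!

which matches the ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 invocation recorded for the async_init run.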
00:20:22.030 15:00:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:22.030 15:00:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:22.030 15:00:54 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:22.030 15:00:54 -- host/dma.sh@13 -- # exit 0 00:20:22.030 ************************************ 00:20:22.030 END TEST dma 00:20:22.030 ************************************ 00:20:22.030 00:20:22.030 real 0m0.225s 00:20:22.030 user 0m0.135s 00:20:22.030 sys 0m0.094s 00:20:22.030 15:00:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:22.030 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:22.030 15:00:54 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:22.030 15:00:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:22.030 15:00:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:22.030 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:20:22.030 ************************************ 00:20:22.030 START TEST nvmf_identify 00:20:22.030 ************************************ 00:20:22.030 15:00:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:22.030 * Looking for test storage... 00:20:22.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:22.030 15:00:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:22.030 15:00:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:22.030 15:00:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:22.289 15:00:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:22.289 15:00:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:22.289 15:00:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:22.289 15:00:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:22.289 15:00:55 -- scripts/common.sh@335 -- # IFS=.-: 00:20:22.289 15:00:55 -- scripts/common.sh@335 -- # read -ra ver1 00:20:22.289 15:00:55 -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.289 15:00:55 -- scripts/common.sh@336 -- # read -ra ver2 00:20:22.289 15:00:55 -- scripts/common.sh@337 -- # local 'op=<' 00:20:22.289 15:00:55 -- scripts/common.sh@339 -- # ver1_l=2 00:20:22.289 15:00:55 -- scripts/common.sh@340 -- # ver2_l=1 00:20:22.289 15:00:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:22.289 15:00:55 -- scripts/common.sh@343 -- # case "$op" in 00:20:22.289 15:00:55 -- scripts/common.sh@344 -- # : 1 00:20:22.289 15:00:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:22.289 15:00:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:22.289 15:00:55 -- scripts/common.sh@364 -- # decimal 1 00:20:22.289 15:00:55 -- scripts/common.sh@352 -- # local d=1 00:20:22.289 15:00:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.289 15:00:55 -- scripts/common.sh@354 -- # echo 1 00:20:22.289 15:00:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:22.289 15:00:55 -- scripts/common.sh@365 -- # decimal 2 00:20:22.289 15:00:55 -- scripts/common.sh@352 -- # local d=2 00:20:22.289 15:00:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.289 15:00:55 -- scripts/common.sh@354 -- # echo 2 00:20:22.289 15:00:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:22.289 15:00:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:22.289 15:00:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:22.289 15:00:55 -- scripts/common.sh@367 -- # return 0 00:20:22.289 15:00:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.289 15:00:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:22.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.289 --rc genhtml_branch_coverage=1 00:20:22.289 --rc genhtml_function_coverage=1 00:20:22.289 --rc genhtml_legend=1 00:20:22.289 --rc geninfo_all_blocks=1 00:20:22.289 --rc geninfo_unexecuted_blocks=1 00:20:22.289 00:20:22.289 ' 00:20:22.289 15:00:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:22.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.289 --rc genhtml_branch_coverage=1 00:20:22.289 --rc genhtml_function_coverage=1 00:20:22.289 --rc genhtml_legend=1 00:20:22.289 --rc geninfo_all_blocks=1 00:20:22.289 --rc geninfo_unexecuted_blocks=1 00:20:22.289 00:20:22.289 ' 00:20:22.289 15:00:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:22.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.289 --rc genhtml_branch_coverage=1 00:20:22.289 --rc genhtml_function_coverage=1 00:20:22.289 --rc genhtml_legend=1 00:20:22.289 --rc geninfo_all_blocks=1 00:20:22.289 --rc geninfo_unexecuted_blocks=1 00:20:22.289 00:20:22.289 ' 00:20:22.289 15:00:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:22.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.289 --rc genhtml_branch_coverage=1 00:20:22.289 --rc genhtml_function_coverage=1 00:20:22.289 --rc genhtml_legend=1 00:20:22.289 --rc geninfo_all_blocks=1 00:20:22.289 --rc geninfo_unexecuted_blocks=1 00:20:22.289 00:20:22.289 ' 00:20:22.289 15:00:55 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.289 15:00:55 -- nvmf/common.sh@7 -- # uname -s 00:20:22.289 15:00:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.289 15:00:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.289 15:00:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.289 15:00:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.289 15:00:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.289 15:00:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.289 15:00:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.289 15:00:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.289 15:00:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.289 15:00:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.289 15:00:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:20:22.289 
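nvme gen-hostnqn (nvme-cli) mints a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<random-uuid>; the common helpers keep the whole NQN and, on the next traced line, the bare UUID as the host ID, and the NVME_HOST array later hands both to nvme connect. A small illustration of how the pair is produced and meant to be consumed (the connect line is only an example of the intended use, with the target details taken from this identify run rather than from the trace at this point):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep just the UUID portion
nvme connect --transport=tcp --traddr=10.0.0.2 --trsvcid=4420 \
    --nqn=nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"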
15:00:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:20:22.289 15:00:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.289 15:00:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.289 15:00:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:22.289 15:00:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.289 15:00:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.289 15:00:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.289 15:00:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.289 15:00:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.290 15:00:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.290 15:00:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.290 15:00:55 -- paths/export.sh@5 -- # export PATH 00:20:22.290 15:00:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.290 15:00:55 -- nvmf/common.sh@46 -- # : 0 00:20:22.290 15:00:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:22.290 15:00:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:22.290 15:00:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:22.290 15:00:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.290 15:00:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.290 15:00:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:22.290 15:00:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:22.290 15:00:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:22.290 15:00:55 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:22.290 15:00:55 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:22.290 15:00:55 -- host/identify.sh@14 -- # nvmftestinit 00:20:22.290 15:00:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:22.290 15:00:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.290 15:00:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:22.290 15:00:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:22.290 15:00:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:22.290 15:00:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.290 15:00:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.290 15:00:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.290 15:00:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:22.290 15:00:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:22.290 15:00:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:22.290 15:00:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:22.290 15:00:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:22.290 15:00:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:22.290 15:00:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.290 15:00:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.290 15:00:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:22.290 15:00:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:22.290 15:00:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:22.290 15:00:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:22.290 15:00:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:22.290 15:00:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.290 15:00:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:22.290 15:00:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:22.290 15:00:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:22.290 15:00:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:22.290 15:00:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:22.290 15:00:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:22.290 Cannot find device "nvmf_tgt_br" 00:20:22.290 15:00:55 -- nvmf/common.sh@154 -- # true 00:20:22.290 15:00:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:22.290 Cannot find device "nvmf_tgt_br2" 00:20:22.290 15:00:55 -- nvmf/common.sh@155 -- # true 00:20:22.290 15:00:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:22.290 15:00:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:22.290 Cannot find device "nvmf_tgt_br" 00:20:22.290 15:00:55 -- nvmf/common.sh@157 -- # true 00:20:22.290 15:00:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:22.290 Cannot find device "nvmf_tgt_br2" 00:20:22.290 15:00:55 -- nvmf/common.sh@158 -- # true 00:20:22.290 15:00:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:22.290 15:00:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:22.290 15:00:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:22.290 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:22.290 15:00:55 -- nvmf/common.sh@161 -- # true 00:20:22.290 15:00:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.290 15:00:55 -- nvmf/common.sh@162 -- # true 00:20:22.290 15:00:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:22.290 15:00:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:22.290 15:00:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:22.290 15:00:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:22.549 15:00:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:22.549 15:00:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:22.549 15:00:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:22.549 15:00:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:22.549 15:00:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:22.549 15:00:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:22.549 15:00:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:22.549 15:00:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:22.549 15:00:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:22.549 15:00:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:22.549 15:00:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:22.549 15:00:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:22.549 15:00:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:22.549 15:00:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:22.549 15:00:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:22.549 15:00:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:22.549 15:00:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:22.549 15:00:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:22.549 15:00:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:22.549 15:00:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:22.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:20:22.549 00:20:22.549 --- 10.0.0.2 ping statistics --- 00:20:22.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.549 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:22.549 15:00:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:22.549 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:22.549 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:22.549 00:20:22.549 --- 10.0.0.3 ping statistics --- 00:20:22.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.549 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:22.549 15:00:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:22.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:22.549 00:20:22.549 --- 10.0.0.1 ping statistics --- 00:20:22.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.549 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:22.549 15:00:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.549 15:00:55 -- nvmf/common.sh@421 -- # return 0 00:20:22.549 15:00:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:22.549 15:00:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.549 15:00:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:22.549 15:00:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:22.549 15:00:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.549 15:00:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:22.549 15:00:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:22.549 15:00:55 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:22.549 15:00:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:22.549 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:20:22.549 15:00:55 -- host/identify.sh@19 -- # nvmfpid=93583 00:20:22.549 15:00:55 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:22.549 15:00:55 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:22.549 15:00:55 -- host/identify.sh@23 -- # waitforlisten 93583 00:20:22.549 15:00:55 -- common/autotest_common.sh@829 -- # '[' -z 93583 ']' 00:20:22.549 15:00:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.549 15:00:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.549 15:00:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.549 15:00:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.549 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:20:22.809 [2024-12-01 15:00:55.672556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:22.809 [2024-12-01 15:00:55.672652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.809 [2024-12-01 15:00:55.817899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.809 [2024-12-01 15:00:55.886733] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:22.809 [2024-12-01 15:00:55.886938] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.809 [2024-12-01 15:00:55.886955] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.809 [2024-12-01 15:00:55.886966] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
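Both test targets are launched with -e 0xFFFF, so every tracepoint group is enabled and the app prints the spdk_trace hint seen above; the trace_register_description ERROR about the over-long RDMA tracepoint name does not stop the run. The -m 0xF mask used for this identify target requests four cores, which is why four reactors come up next. If a run needs to be dissected, the two options the hint offers look like this (assuming the spdk_trace app is built alongside the other SPDK binaries and is on PATH):

# Live snapshot of the shared-memory trace for app instance 0 (-i 0)
spdk_trace -s nvmf -i 0

# Or keep the ring buffer around for offline analysis after the target exits
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0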
00:20:22.809 [2024-12-01 15:00:55.887168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.809 [2024-12-01 15:00:55.887336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.809 [2024-12-01 15:00:55.888069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.809 [2024-12-01 15:00:55.888125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.746 15:00:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.746 15:00:56 -- common/autotest_common.sh@862 -- # return 0 00:20:23.746 15:00:56 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:23.746 15:00:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.746 15:00:56 -- common/autotest_common.sh@10 -- # set +x 00:20:23.746 [2024-12-01 15:00:56.706520] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.746 15:00:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.746 15:00:56 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:23.746 15:00:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.746 15:00:56 -- common/autotest_common.sh@10 -- # set +x 00:20:23.746 15:00:56 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:23.746 15:00:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.746 15:00:56 -- common/autotest_common.sh@10 -- # set +x 00:20:23.746 Malloc0 00:20:23.746 15:00:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.746 15:00:56 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.746 15:00:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.746 15:00:56 -- common/autotest_common.sh@10 -- # set +x 00:20:23.746 15:00:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.746 15:00:56 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:23.746 15:00:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.746 15:00:56 -- common/autotest_common.sh@10 -- # set +x 00:20:23.747 15:00:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.747 15:00:56 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.747 15:00:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.747 15:00:56 -- common/autotest_common.sh@10 -- # set +x 00:20:23.747 [2024-12-01 15:00:56.811067] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.747 15:00:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.747 15:00:56 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:23.747 15:00:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.747 15:00:56 -- common/autotest_common.sh@10 -- # set +x 00:20:23.747 15:00:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.747 15:00:56 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:23.747 15:00:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.747 15:00:56 -- common/autotest_common.sh@10 -- # set +x 00:20:23.747 [2024-12-01 15:00:56.826815] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:23.747 [ 
00:20:23.747 { 00:20:23.747 "allow_any_host": true, 00:20:23.747 "hosts": [], 00:20:23.747 "listen_addresses": [ 00:20:23.747 { 00:20:23.747 "adrfam": "IPv4", 00:20:23.747 "traddr": "10.0.0.2", 00:20:23.747 "transport": "TCP", 00:20:23.747 "trsvcid": "4420", 00:20:23.747 "trtype": "TCP" 00:20:23.747 } 00:20:23.747 ], 00:20:23.747 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:23.747 "subtype": "Discovery" 00:20:23.747 }, 00:20:23.747 { 00:20:23.747 "allow_any_host": true, 00:20:23.747 "hosts": [], 00:20:23.747 "listen_addresses": [ 00:20:23.747 { 00:20:23.747 "adrfam": "IPv4", 00:20:23.747 "traddr": "10.0.0.2", 00:20:23.747 "transport": "TCP", 00:20:23.747 "trsvcid": "4420", 00:20:23.747 "trtype": "TCP" 00:20:23.747 } 00:20:23.747 ], 00:20:23.747 "max_cntlid": 65519, 00:20:23.747 "max_namespaces": 32, 00:20:23.747 "min_cntlid": 1, 00:20:23.747 "model_number": "SPDK bdev Controller", 00:20:23.747 "namespaces": [ 00:20:23.747 { 00:20:23.747 "bdev_name": "Malloc0", 00:20:23.747 "eui64": "ABCDEF0123456789", 00:20:23.747 "name": "Malloc0", 00:20:23.747 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:23.747 "nsid": 1, 00:20:23.747 "uuid": "0add5ce0-4977-477a-a88d-f49ea2a3b0d0" 00:20:23.747 } 00:20:23.747 ], 00:20:23.747 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.747 "serial_number": "SPDK00000000000001", 00:20:23.747 "subtype": "NVMe" 00:20:23.747 } 00:20:23.747 ] 00:20:23.747 15:00:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.747 15:00:56 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:24.007 [2024-12-01 15:00:56.867020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
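The subsystem listing above reflects the RPC sequence issued through rpc_cmd; a minimal sketch of the same target-side configuration via SPDK's scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket (the discovery shorthand used above maps to nqn.2014-08.org.nvmexpress.discovery):
  # transport, backing bdev, subsystem, namespace and listeners, mirroring host/identify.sh
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems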
00:20:24.007 [2024-12-01 15:00:56.867086] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93637 ] 00:20:24.007 [2024-12-01 15:00:57.006718] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:24.007 [2024-12-01 15:00:57.006790] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:24.007 [2024-12-01 15:00:57.006797] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:24.007 [2024-12-01 15:00:57.006806] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:24.007 [2024-12-01 15:00:57.006816] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:24.007 [2024-12-01 15:00:57.006965] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:24.007 [2024-12-01 15:00:57.007029] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e8f510 0 00:20:24.007 [2024-12-01 15:00:57.021803] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:24.007 [2024-12-01 15:00:57.021822] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:24.007 [2024-12-01 15:00:57.021839] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:24.007 [2024-12-01 15:00:57.021843] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:24.007 [2024-12-01 15:00:57.021892] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.007 [2024-12-01 15:00:57.021899] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.007 [2024-12-01 15:00:57.021903] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f510) 00:20:24.007 [2024-12-01 15:00:57.021916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:24.007 [2024-12-01 15:00:57.021944] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edb8a0, cid 0, qid 0 00:20:24.007 [2024-12-01 15:00:57.029813] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.007 [2024-12-01 15:00:57.029830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.007 [2024-12-01 15:00:57.029834] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.007 [2024-12-01 15:00:57.029849] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edb8a0) on tqpair=0x1e8f510 00:20:24.007 [2024-12-01 15:00:57.029863] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:24.007 [2024-12-01 15:00:57.029870] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:24.007 [2024-12-01 15:00:57.029875] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:24.007 [2024-12-01 15:00:57.029890] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.007 [2024-12-01 15:00:57.029894] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.007 [2024-12-01 
15:00:57.029898] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f510) 00:20:24.007 [2024-12-01 15:00:57.029905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.007 [2024-12-01 15:00:57.029931] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edb8a0, cid 0, qid 0 00:20:24.007 [2024-12-01 15:00:57.030017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.007 [2024-12-01 15:00:57.030023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.007 [2024-12-01 15:00:57.030026] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030030] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edb8a0) on tqpair=0x1e8f510 00:20:24.008 [2024-12-01 15:00:57.030035] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:24.008 [2024-12-01 15:00:57.030042] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:24.008 [2024-12-01 15:00:57.030049] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030056] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f510) 00:20:24.008 [2024-12-01 15:00:57.030062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.008 [2024-12-01 15:00:57.030092] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edb8a0, cid 0, qid 0 00:20:24.008 [2024-12-01 15:00:57.030183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.008 [2024-12-01 15:00:57.030188] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.008 [2024-12-01 15:00:57.030192] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030197] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edb8a0) on tqpair=0x1e8f510 00:20:24.008 [2024-12-01 15:00:57.030203] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:24.008 [2024-12-01 15:00:57.030210] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:24.008 [2024-12-01 15:00:57.030217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f510) 00:20:24.008 [2024-12-01 15:00:57.030230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.008 [2024-12-01 15:00:57.030246] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edb8a0, cid 0, qid 0 00:20:24.008 [2024-12-01 15:00:57.030314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.008 [2024-12-01 15:00:57.030320] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.008 [2024-12-01 15:00:57.030323] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030326] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edb8a0) on tqpair=0x1e8f510 00:20:24.008 [2024-12-01 15:00:57.030332] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:24.008 [2024-12-01 15:00:57.030341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030344] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030348] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f510) 00:20:24.008 [2024-12-01 15:00:57.030354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.008 [2024-12-01 15:00:57.030370] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edb8a0, cid 0, qid 0 00:20:24.008 [2024-12-01 15:00:57.030438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.008 [2024-12-01 15:00:57.030444] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.008 [2024-12-01 15:00:57.030447] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030450] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edb8a0) on tqpair=0x1e8f510 00:20:24.008 [2024-12-01 15:00:57.030456] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:24.008 [2024-12-01 15:00:57.030461] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:24.008 [2024-12-01 15:00:57.030467] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:24.008 [2024-12-01 15:00:57.030572] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:24.008 [2024-12-01 15:00:57.030576] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:24.008 [2024-12-01 15:00:57.030584] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030591] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f510) 00:20:24.008 [2024-12-01 15:00:57.030598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.008 [2024-12-01 15:00:57.030614] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edb8a0, cid 0, qid 0 00:20:24.008 [2024-12-01 15:00:57.030684] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.008 [2024-12-01 15:00:57.030690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.008 [2024-12-01 15:00:57.030693] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:24.008 [2024-12-01 15:00:57.030696] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edb8a0) on tqpair=0x1e8f510 00:20:24.008 [2024-12-01 15:00:57.030702] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:24.008 [2024-12-01 15:00:57.030710] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030714] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030717] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f510) 00:20:24.008 [2024-12-01 15:00:57.030723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.008 [2024-12-01 15:00:57.030739] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edb8a0, cid 0, qid 0 00:20:24.008 [2024-12-01 15:00:57.030825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.008 [2024-12-01 15:00:57.030832] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.008 [2024-12-01 15:00:57.030835] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030839] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edb8a0) on tqpair=0x1e8f510 00:20:24.008 [2024-12-01 15:00:57.030844] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:24.008 [2024-12-01 15:00:57.030848] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:24.008 [2024-12-01 15:00:57.030855] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:24.008 [2024-12-01 15:00:57.030870] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:24.008 [2024-12-01 15:00:57.030879] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030883] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.030886] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f510) 00:20:24.008 [2024-12-01 15:00:57.030893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.008 [2024-12-01 15:00:57.030912] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edb8a0, cid 0, qid 0 00:20:24.008 [2024-12-01 15:00:57.031029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.008 [2024-12-01 15:00:57.031036] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.008 [2024-12-01 15:00:57.031039] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.031043] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f510): datao=0, datal=4096, cccid=0 00:20:24.008 [2024-12-01 15:00:57.031047] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edb8a0) on tqpair(0x1e8f510): expected_datao=0, 
payload_size=4096 00:20:24.008 [2024-12-01 15:00:57.031055] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.031059] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.031067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.008 [2024-12-01 15:00:57.031072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.008 [2024-12-01 15:00:57.031075] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.031078] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edb8a0) on tqpair=0x1e8f510 00:20:24.008 [2024-12-01 15:00:57.031086] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:24.008 [2024-12-01 15:00:57.031091] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:24.008 [2024-12-01 15:00:57.031095] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:24.008 [2024-12-01 15:00:57.031100] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:24.008 [2024-12-01 15:00:57.031104] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:24.008 [2024-12-01 15:00:57.031108] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:24.008 [2024-12-01 15:00:57.031120] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:24.008 [2024-12-01 15:00:57.031129] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.031133] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.031136] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f510) 00:20:24.008 [2024-12-01 15:00:57.031143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:24.008 [2024-12-01 15:00:57.031172] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edb8a0, cid 0, qid 0 00:20:24.008 [2024-12-01 15:00:57.031259] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.008 [2024-12-01 15:00:57.031265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.008 [2024-12-01 15:00:57.031268] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.031271] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edb8a0) on tqpair=0x1e8f510 00:20:24.008 [2024-12-01 15:00:57.031279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.031282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.008 [2024-12-01 15:00:57.031286] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e8f510) 00:20:24.008 [2024-12-01 15:00:57.031291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.008 [2024-12-01 
15:00:57.031297] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031300] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031303] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e8f510) 00:20:24.009 [2024-12-01 15:00:57.031308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.009 [2024-12-01 15:00:57.031313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031316] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031319] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e8f510) 00:20:24.009 [2024-12-01 15:00:57.031324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.009 [2024-12-01 15:00:57.031329] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031335] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f510) 00:20:24.009 [2024-12-01 15:00:57.031340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.009 [2024-12-01 15:00:57.031345] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:24.009 [2024-12-01 15:00:57.031356] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:24.009 [2024-12-01 15:00:57.031362] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031365] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f510) 00:20:24.009 [2024-12-01 15:00:57.031375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.009 [2024-12-01 15:00:57.031394] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edb8a0, cid 0, qid 0 00:20:24.009 [2024-12-01 15:00:57.031400] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edba00, cid 1, qid 0 00:20:24.009 [2024-12-01 15:00:57.031404] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbb60, cid 2, qid 0 00:20:24.009 [2024-12-01 15:00:57.031408] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbcc0, cid 3, qid 0 00:20:24.009 [2024-12-01 15:00:57.031412] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbe20, cid 4, qid 0 00:20:24.009 [2024-12-01 15:00:57.031529] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.009 [2024-12-01 15:00:57.031534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.009 [2024-12-01 15:00:57.031537] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031541] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1edbe20) on tqpair=0x1e8f510 00:20:24.009 [2024-12-01 15:00:57.031548] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:24.009 [2024-12-01 15:00:57.031553] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:24.009 [2024-12-01 15:00:57.031562] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031566] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031569] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f510) 00:20:24.009 [2024-12-01 15:00:57.031575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.009 [2024-12-01 15:00:57.031592] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbe20, cid 4, qid 0 00:20:24.009 [2024-12-01 15:00:57.031673] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.009 [2024-12-01 15:00:57.031684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.009 [2024-12-01 15:00:57.031688] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031691] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f510): datao=0, datal=4096, cccid=4 00:20:24.009 [2024-12-01 15:00:57.031696] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edbe20) on tqpair(0x1e8f510): expected_datao=0, payload_size=4096 00:20:24.009 [2024-12-01 15:00:57.031703] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031707] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031714] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.009 [2024-12-01 15:00:57.031719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.009 [2024-12-01 15:00:57.031722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031726] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edbe20) on tqpair=0x1e8f510 00:20:24.009 [2024-12-01 15:00:57.031738] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:24.009 [2024-12-01 15:00:57.031803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031813] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031817] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f510) 00:20:24.009 [2024-12-01 15:00:57.031824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.009 [2024-12-01 15:00:57.031831] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.031838] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e8f510) 00:20:24.009 [2024-12-01 15:00:57.031843] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.009 [2024-12-01 15:00:57.031871] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbe20, cid 4, qid 0 00:20:24.009 [2024-12-01 15:00:57.031878] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbf80, cid 5, qid 0 00:20:24.009 [2024-12-01 15:00:57.032013] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.009 [2024-12-01 15:00:57.032019] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.009 [2024-12-01 15:00:57.032022] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.032026] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f510): datao=0, datal=1024, cccid=4 00:20:24.009 [2024-12-01 15:00:57.032030] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edbe20) on tqpair(0x1e8f510): expected_datao=0, payload_size=1024 00:20:24.009 [2024-12-01 15:00:57.032036] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.032040] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.032045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.009 [2024-12-01 15:00:57.032049] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.009 [2024-12-01 15:00:57.032053] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.032056] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edbf80) on tqpair=0x1e8f510 00:20:24.009 [2024-12-01 15:00:57.077795] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.009 [2024-12-01 15:00:57.077812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.009 [2024-12-01 15:00:57.077816] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.077832] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edbe20) on tqpair=0x1e8f510 00:20:24.009 [2024-12-01 15:00:57.077845] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.077850] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.077853] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f510) 00:20:24.009 [2024-12-01 15:00:57.077860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.009 [2024-12-01 15:00:57.077889] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbe20, cid 4, qid 0 00:20:24.009 [2024-12-01 15:00:57.077985] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.009 [2024-12-01 15:00:57.077991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.009 [2024-12-01 15:00:57.077994] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.077997] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f510): datao=0, datal=3072, cccid=4 00:20:24.009 [2024-12-01 15:00:57.078001] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edbe20) on tqpair(0x1e8f510): expected_datao=0, payload_size=3072 00:20:24.009 [2024-12-01 
15:00:57.078008] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.078012] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.078020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.009 [2024-12-01 15:00:57.078025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.009 [2024-12-01 15:00:57.078028] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.078031] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edbe20) on tqpair=0x1e8f510 00:20:24.009 [2024-12-01 15:00:57.078041] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.078045] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.078048] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e8f510) 00:20:24.009 [2024-12-01 15:00:57.078054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.009 [2024-12-01 15:00:57.078080] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbe20, cid 4, qid 0 00:20:24.009 [2024-12-01 15:00:57.078181] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.009 [2024-12-01 15:00:57.078187] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.009 [2024-12-01 15:00:57.078190] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.078193] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e8f510): datao=0, datal=8, cccid=4 00:20:24.009 [2024-12-01 15:00:57.078197] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edbe20) on tqpair(0x1e8f510): expected_datao=0, payload_size=8 00:20:24.009 [2024-12-01 15:00:57.078204] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.009 [2024-12-01 15:00:57.078207] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.274 ===================================================== 00:20:24.274 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:24.274 ===================================================== 00:20:24.274 Controller Capabilities/Features 00:20:24.274 ================================ 00:20:24.274 Vendor ID: 0000 00:20:24.274 Subsystem Vendor ID: 0000 00:20:24.274 Serial Number: .................... 00:20:24.274 Model Number: ........................................ 
00:20:24.274 Firmware Version: 24.01.1 00:20:24.274 Recommended Arb Burst: 0 00:20:24.274 IEEE OUI Identifier: 00 00 00 00:20:24.274 Multi-path I/O 00:20:24.274 May have multiple subsystem ports: No 00:20:24.274 May have multiple controllers: No 00:20:24.275 Associated with SR-IOV VF: No 00:20:24.275 Max Data Transfer Size: 131072 00:20:24.275 Max Number of Namespaces: 0 00:20:24.275 Max Number of I/O Queues: 1024 00:20:24.275 NVMe Specification Version (VS): 1.3 00:20:24.275 NVMe Specification Version (Identify): 1.3 00:20:24.275 Maximum Queue Entries: 128 00:20:24.275 Contiguous Queues Required: Yes 00:20:24.275 Arbitration Mechanisms Supported 00:20:24.275 Weighted Round Robin: Not Supported 00:20:24.275 Vendor Specific: Not Supported 00:20:24.275 Reset Timeout: 15000 ms 00:20:24.275 Doorbell Stride: 4 bytes 00:20:24.275 NVM Subsystem Reset: Not Supported 00:20:24.275 Command Sets Supported 00:20:24.275 NVM Command Set: Supported 00:20:24.275 Boot Partition: Not Supported 00:20:24.275 Memory Page Size Minimum: 4096 bytes 00:20:24.275 Memory Page Size Maximum: 4096 bytes 00:20:24.275 Persistent Memory Region: Not Supported 00:20:24.275 Optional Asynchronous Events Supported 00:20:24.275 Namespace Attribute Notices: Not Supported 00:20:24.275 Firmware Activation Notices: Not Supported 00:20:24.275 ANA Change Notices: Not Supported 00:20:24.275 PLE Aggregate Log Change Notices: Not Supported 00:20:24.275 LBA Status Info Alert Notices: Not Supported 00:20:24.275 EGE Aggregate Log Change Notices: Not Supported 00:20:24.275 Normal NVM Subsystem Shutdown event: Not Supported 00:20:24.275 Zone Descriptor Change Notices: Not Supported 00:20:24.275 Discovery Log Change Notices: Supported 00:20:24.275 Controller Attributes 00:20:24.275 128-bit Host Identifier: Not Supported 00:20:24.275 Non-Operational Permissive Mode: Not Supported 00:20:24.275 NVM Sets: Not Supported 00:20:24.275 Read Recovery Levels: Not Supported 00:20:24.275 Endurance Groups: Not Supported 00:20:24.275 Predictable Latency Mode: Not Supported 00:20:24.275 Traffic Based Keep ALive: Not Supported 00:20:24.275 Namespace Granularity: Not Supported 00:20:24.275 SQ Associations: Not Supported 00:20:24.275 UUID List: Not Supported 00:20:24.275 Multi-Domain Subsystem: Not Supported 00:20:24.275 Fixed Capacity Management: Not Supported 00:20:24.275 Variable Capacity Management: Not Supported 00:20:24.275 Delete Endurance Group: Not Supported 00:20:24.275 Delete NVM Set: Not Supported 00:20:24.275 Extended LBA Formats Supported: Not Supported 00:20:24.275 Flexible Data Placement Supported: Not Supported 00:20:24.275 00:20:24.275 Controller Memory Buffer Support 00:20:24.275 ================================ 00:20:24.275 Supported: No 00:20:24.275 00:20:24.275 Persistent Memory Region Support 00:20:24.275 ================================ 00:20:24.275 Supported: No 00:20:24.275 00:20:24.275 Admin Command Set Attributes 00:20:24.275 ============================ 00:20:24.275 Security Send/Receive: Not Supported 00:20:24.275 Format NVM: Not Supported 00:20:24.275 Firmware Activate/Download: Not Supported 00:20:24.275 Namespace Management: Not Supported 00:20:24.275 Device Self-Test: Not Supported 00:20:24.275 Directives: Not Supported 00:20:24.275 NVMe-MI: Not Supported 00:20:24.275 Virtualization Management: Not Supported 00:20:24.275 Doorbell Buffer Config: Not Supported 00:20:24.275 Get LBA Status Capability: Not Supported 00:20:24.275 Command & Feature Lockdown Capability: Not Supported 00:20:24.275 Abort Command Limit: 1 00:20:24.275 
Async Event Request Limit: 4 00:20:24.275 Number of Firmware Slots: N/A 00:20:24.275 Firmware Slot 1 Read-Only: N/A 00:20:24.275 Fi[2024-12-01 15:00:57.119870] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.275 [2024-12-01 15:00:57.119888] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.275 [2024-12-01 15:00:57.119893] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.275 [2024-12-01 15:00:57.119907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edbe20) on tqpair=0x1e8f510 00:20:24.275 rmware Activation Without Reset: N/A 00:20:24.275 Multiple Update Detection Support: N/A 00:20:24.275 Firmware Update Granularity: No Information Provided 00:20:24.275 Per-Namespace SMART Log: No 00:20:24.275 Asymmetric Namespace Access Log Page: Not Supported 00:20:24.275 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:24.275 Command Effects Log Page: Not Supported 00:20:24.275 Get Log Page Extended Data: Supported 00:20:24.275 Telemetry Log Pages: Not Supported 00:20:24.275 Persistent Event Log Pages: Not Supported 00:20:24.275 Supported Log Pages Log Page: May Support 00:20:24.275 Commands Supported & Effects Log Page: Not Supported 00:20:24.275 Feature Identifiers & Effects Log Page:May Support 00:20:24.275 NVMe-MI Commands & Effects Log Page: May Support 00:20:24.275 Data Area 4 for Telemetry Log: Not Supported 00:20:24.275 Error Log Page Entries Supported: 128 00:20:24.275 Keep Alive: Not Supported 00:20:24.275 00:20:24.275 NVM Command Set Attributes 00:20:24.275 ========================== 00:20:24.275 Submission Queue Entry Size 00:20:24.275 Max: 1 00:20:24.275 Min: 1 00:20:24.275 Completion Queue Entry Size 00:20:24.275 Max: 1 00:20:24.275 Min: 1 00:20:24.275 Number of Namespaces: 0 00:20:24.275 Compare Command: Not Supported 00:20:24.275 Write Uncorrectable Command: Not Supported 00:20:24.275 Dataset Management Command: Not Supported 00:20:24.275 Write Zeroes Command: Not Supported 00:20:24.275 Set Features Save Field: Not Supported 00:20:24.275 Reservations: Not Supported 00:20:24.275 Timestamp: Not Supported 00:20:24.275 Copy: Not Supported 00:20:24.275 Volatile Write Cache: Not Present 00:20:24.275 Atomic Write Unit (Normal): 1 00:20:24.275 Atomic Write Unit (PFail): 1 00:20:24.275 Atomic Compare & Write Unit: 1 00:20:24.275 Fused Compare & Write: Supported 00:20:24.275 Scatter-Gather List 00:20:24.275 SGL Command Set: Supported 00:20:24.275 SGL Keyed: Supported 00:20:24.275 SGL Bit Bucket Descriptor: Not Supported 00:20:24.275 SGL Metadata Pointer: Not Supported 00:20:24.275 Oversized SGL: Not Supported 00:20:24.275 SGL Metadata Address: Not Supported 00:20:24.275 SGL Offset: Supported 00:20:24.275 Transport SGL Data Block: Not Supported 00:20:24.275 Replay Protected Memory Block: Not Supported 00:20:24.275 00:20:24.275 Firmware Slot Information 00:20:24.275 ========================= 00:20:24.275 Active slot: 0 00:20:24.275 00:20:24.275 00:20:24.275 Error Log 00:20:24.275 ========= 00:20:24.275 00:20:24.275 Active Namespaces 00:20:24.275 ================= 00:20:24.275 Discovery Log Page 00:20:24.275 ================== 00:20:24.275 Generation Counter: 2 00:20:24.275 Number of Records: 2 00:20:24.275 Record Format: 0 00:20:24.275 00:20:24.275 Discovery Log Entry 0 00:20:24.275 ---------------------- 00:20:24.275 Transport Type: 3 (TCP) 00:20:24.275 Address Family: 1 (IPv4) 00:20:24.275 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:24.275 Entry Flags: 00:20:24.275 Duplicate 
Returned Information: 1 00:20:24.275 Explicit Persistent Connection Support for Discovery: 1 00:20:24.275 Transport Requirements: 00:20:24.275 Secure Channel: Not Required 00:20:24.275 Port ID: 0 (0x0000) 00:20:24.275 Controller ID: 65535 (0xffff) 00:20:24.275 Admin Max SQ Size: 128 00:20:24.275 Transport Service Identifier: 4420 00:20:24.275 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:24.275 Transport Address: 10.0.0.2 00:20:24.275 Discovery Log Entry 1 00:20:24.275 ---------------------- 00:20:24.275 Transport Type: 3 (TCP) 00:20:24.275 Address Family: 1 (IPv4) 00:20:24.275 Subsystem Type: 2 (NVM Subsystem) 00:20:24.275 Entry Flags: 00:20:24.275 Duplicate Returned Information: 0 00:20:24.275 Explicit Persistent Connection Support for Discovery: 0 00:20:24.275 Transport Requirements: 00:20:24.275 Secure Channel: Not Required 00:20:24.275 Port ID: 0 (0x0000) 00:20:24.275 Controller ID: 65535 (0xffff) 00:20:24.275 Admin Max SQ Size: 128 00:20:24.275 Transport Service Identifier: 4420 00:20:24.275 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:24.275 Transport Address: 10.0.0.2 [2024-12-01 15:00:57.120049] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:24.275 [2024-12-01 15:00:57.120068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.275 [2024-12-01 15:00:57.120091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.275 [2024-12-01 15:00:57.120097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.275 [2024-12-01 15:00:57.120103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.275 [2024-12-01 15:00:57.120115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120130] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f510) 00:20:24.276 [2024-12-01 15:00:57.120138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.276 [2024-12-01 15:00:57.120171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbcc0, cid 3, qid 0 00:20:24.276 [2024-12-01 15:00:57.120265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.276 [2024-12-01 15:00:57.120272] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.276 [2024-12-01 15:00:57.120275] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120279] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edbcc0) on tqpair=0x1e8f510 00:20:24.276 [2024-12-01 15:00:57.120287] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120291] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f510) 00:20:24.276 [2024-12-01 15:00:57.120301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.276 [2024-12-01 15:00:57.120339] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbcc0, cid 3, qid 0 00:20:24.276 [2024-12-01 15:00:57.120436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.276 [2024-12-01 15:00:57.120442] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.276 [2024-12-01 15:00:57.120445] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120449] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edbcc0) on tqpair=0x1e8f510 00:20:24.276 [2024-12-01 15:00:57.120455] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:24.276 [2024-12-01 15:00:57.120459] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:24.276 [2024-12-01 15:00:57.120469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120477] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f510) 00:20:24.276 [2024-12-01 15:00:57.120483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.276 [2024-12-01 15:00:57.120501] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbcc0, cid 3, qid 0 00:20:24.276 [2024-12-01 15:00:57.120564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.276 [2024-12-01 15:00:57.120591] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.276 [2024-12-01 15:00:57.120595] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120599] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edbcc0) on tqpair=0x1e8f510 00:20:24.276 [2024-12-01 15:00:57.120610] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120614] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.120618] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e8f510) 00:20:24.276 [2024-12-01 15:00:57.120625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.276 [2024-12-01 15:00:57.120659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbcc0, cid 3, qid 0 00:20:24.276 [2024-12-01 15:00:57.120737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.276 [2024-12-01 15:00:57.120750] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.276 [2024-12-01 15:00:57.124814] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.124820] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edbcc0) on tqpair=0x1e8f510 00:20:24.276 [2024-12-01 15:00:57.124841] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.124846] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.124849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1e8f510) 00:20:24.276 [2024-12-01 15:00:57.124857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.276 [2024-12-01 15:00:57.124881] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edbcc0, cid 3, qid 0 00:20:24.276 [2024-12-01 15:00:57.124967] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.276 [2024-12-01 15:00:57.124974] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.276 [2024-12-01 15:00:57.124977] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.124981] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1edbcc0) on tqpair=0x1e8f510 00:20:24.276 [2024-12-01 15:00:57.124989] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:20:24.276 00:20:24.276 15:00:57 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:24.276 [2024-12-01 15:00:57.158450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:24.276 [2024-12-01 15:00:57.158517] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93640 ] 00:20:24.276 [2024-12-01 15:00:57.293691] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:24.276 [2024-12-01 15:00:57.293781] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:24.276 [2024-12-01 15:00:57.293788] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:24.276 [2024-12-01 15:00:57.293798] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:24.276 [2024-12-01 15:00:57.293806] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:24.276 [2024-12-01 15:00:57.293899] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:24.276 [2024-12-01 15:00:57.293964] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12a0510 0 00:20:24.276 [2024-12-01 15:00:57.300770] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:24.276 [2024-12-01 15:00:57.300788] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:24.276 [2024-12-01 15:00:57.300793] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:24.276 [2024-12-01 15:00:57.300796] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:24.276 [2024-12-01 15:00:57.300839] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.300845] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.300849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a0510) 00:20:24.276 [2024-12-01 15:00:57.300859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:24.276 [2024-12-01 
15:00:57.300887] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ec8a0, cid 0, qid 0 00:20:24.276 [2024-12-01 15:00:57.308771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.276 [2024-12-01 15:00:57.308787] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.276 [2024-12-01 15:00:57.308791] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.308805] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ec8a0) on tqpair=0x12a0510 00:20:24.276 [2024-12-01 15:00:57.308814] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:24.276 [2024-12-01 15:00:57.308820] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:24.276 [2024-12-01 15:00:57.308826] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:24.276 [2024-12-01 15:00:57.308839] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.308843] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.308847] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a0510) 00:20:24.276 [2024-12-01 15:00:57.308855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.276 [2024-12-01 15:00:57.308879] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ec8a0, cid 0, qid 0 00:20:24.276 [2024-12-01 15:00:57.308954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.276 [2024-12-01 15:00:57.308960] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.276 [2024-12-01 15:00:57.308963] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.308966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ec8a0) on tqpair=0x12a0510 00:20:24.276 [2024-12-01 15:00:57.308972] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:24.276 [2024-12-01 15:00:57.308978] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:24.276 [2024-12-01 15:00:57.308985] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.308989] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.308992] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a0510) 00:20:24.276 [2024-12-01 15:00:57.308999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.276 [2024-12-01 15:00:57.309015] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ec8a0, cid 0, qid 0 00:20:24.276 [2024-12-01 15:00:57.309085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.276 [2024-12-01 15:00:57.309091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.276 [2024-12-01 15:00:57.309094] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.276 [2024-12-01 15:00:57.309097] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x12ec8a0) on tqpair=0x12a0510 00:20:24.276 [2024-12-01 15:00:57.309103] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:24.277 [2024-12-01 15:00:57.309110] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:24.277 [2024-12-01 15:00:57.309117] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a0510) 00:20:24.277 [2024-12-01 15:00:57.309130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.277 [2024-12-01 15:00:57.309145] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ec8a0, cid 0, qid 0 00:20:24.277 [2024-12-01 15:00:57.309202] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.277 [2024-12-01 15:00:57.309208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.277 [2024-12-01 15:00:57.309211] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309214] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ec8a0) on tqpair=0x12a0510 00:20:24.277 [2024-12-01 15:00:57.309220] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:24.277 [2024-12-01 15:00:57.309229] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309233] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309236] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a0510) 00:20:24.277 [2024-12-01 15:00:57.309243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.277 [2024-12-01 15:00:57.309258] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ec8a0, cid 0, qid 0 00:20:24.277 [2024-12-01 15:00:57.309331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.277 [2024-12-01 15:00:57.309338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.277 [2024-12-01 15:00:57.309342] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ec8a0) on tqpair=0x12a0510 00:20:24.277 [2024-12-01 15:00:57.309350] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:24.277 [2024-12-01 15:00:57.309355] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:24.277 [2024-12-01 15:00:57.309362] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:24.277 [2024-12-01 15:00:57.309467] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:24.277 [2024-12-01 15:00:57.309471] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:24.277 [2024-12-01 15:00:57.309479] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309483] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309486] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a0510) 00:20:24.277 [2024-12-01 15:00:57.309493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.277 [2024-12-01 15:00:57.309510] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ec8a0, cid 0, qid 0 00:20:24.277 [2024-12-01 15:00:57.309571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.277 [2024-12-01 15:00:57.309582] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.277 [2024-12-01 15:00:57.309586] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309590] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ec8a0) on tqpair=0x12a0510 00:20:24.277 [2024-12-01 15:00:57.309595] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:24.277 [2024-12-01 15:00:57.309605] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a0510) 00:20:24.277 [2024-12-01 15:00:57.309620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.277 [2024-12-01 15:00:57.309644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ec8a0, cid 0, qid 0 00:20:24.277 [2024-12-01 15:00:57.309697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.277 [2024-12-01 15:00:57.309703] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.277 [2024-12-01 15:00:57.309721] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309724] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ec8a0) on tqpair=0x12a0510 00:20:24.277 [2024-12-01 15:00:57.309729] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:24.277 [2024-12-01 15:00:57.309734] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:24.277 [2024-12-01 15:00:57.309740] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:24.277 [2024-12-01 15:00:57.309755] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:24.277 [2024-12-01 15:00:57.309763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.277 [2024-12-01 
15:00:57.309782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a0510) 00:20:24.277 [2024-12-01 15:00:57.309789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.277 [2024-12-01 15:00:57.309808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ec8a0, cid 0, qid 0 00:20:24.277 [2024-12-01 15:00:57.309911] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.277 [2024-12-01 15:00:57.309917] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.277 [2024-12-01 15:00:57.309920] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309923] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a0510): datao=0, datal=4096, cccid=0 00:20:24.277 [2024-12-01 15:00:57.309927] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ec8a0) on tqpair(0x12a0510): expected_datao=0, payload_size=4096 00:20:24.277 [2024-12-01 15:00:57.309934] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309938] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309945] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.277 [2024-12-01 15:00:57.309949] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.277 [2024-12-01 15:00:57.309953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.277 [2024-12-01 15:00:57.309956] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ec8a0) on tqpair=0x12a0510 00:20:24.277 [2024-12-01 15:00:57.309964] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:24.277 [2024-12-01 15:00:57.309968] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:24.277 [2024-12-01 15:00:57.309972] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:24.277 [2024-12-01 15:00:57.309977] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:24.277 [2024-12-01 15:00:57.309981] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:24.277 [2024-12-01 15:00:57.309985] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:24.277 [2024-12-01 15:00:57.309997] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:24.277 [2024-12-01 15:00:57.310004] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310007] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310011] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a0510) 00:20:24.278 [2024-12-01 15:00:57.310018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:24.278 [2024-12-01 15:00:57.310035] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ec8a0, cid 0, qid 0 
00:20:24.278 [2024-12-01 15:00:57.310100] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.278 [2024-12-01 15:00:57.310106] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.278 [2024-12-01 15:00:57.310109] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310113] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ec8a0) on tqpair=0x12a0510 00:20:24.278 [2024-12-01 15:00:57.310120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310124] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a0510) 00:20:24.278 [2024-12-01 15:00:57.310133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.278 [2024-12-01 15:00:57.310138] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310141] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310144] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12a0510) 00:20:24.278 [2024-12-01 15:00:57.310149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.278 [2024-12-01 15:00:57.310154] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310158] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310160] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12a0510) 00:20:24.278 [2024-12-01 15:00:57.310165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.278 [2024-12-01 15:00:57.310170] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310173] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310176] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a0510) 00:20:24.278 [2024-12-01 15:00:57.310181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.278 [2024-12-01 15:00:57.310185] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:24.278 [2024-12-01 15:00:57.310196] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:24.278 [2024-12-01 15:00:57.310202] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310206] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a0510) 00:20:24.278 [2024-12-01 15:00:57.310214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.278 [2024-12-01 15:00:57.310232] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ec8a0, cid 0, qid 0 00:20:24.278 [2024-12-01 15:00:57.310238] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12eca00, cid 1, qid 0 00:20:24.278 [2024-12-01 15:00:57.310243] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ecb60, cid 2, qid 0 00:20:24.278 [2024-12-01 15:00:57.310247] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12eccc0, cid 3, qid 0 00:20:24.278 [2024-12-01 15:00:57.310251] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ece20, cid 4, qid 0 00:20:24.278 [2024-12-01 15:00:57.310338] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.278 [2024-12-01 15:00:57.310344] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.278 [2024-12-01 15:00:57.310347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310350] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ece20) on tqpair=0x12a0510 00:20:24.278 [2024-12-01 15:00:57.310356] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:24.278 [2024-12-01 15:00:57.310360] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:24.278 [2024-12-01 15:00:57.310367] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:24.278 [2024-12-01 15:00:57.310377] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:24.278 [2024-12-01 15:00:57.310383] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310387] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310390] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a0510) 00:20:24.278 [2024-12-01 15:00:57.310397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:24.278 [2024-12-01 15:00:57.310413] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ece20, cid 4, qid 0 00:20:24.278 [2024-12-01 15:00:57.310476] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.278 [2024-12-01 15:00:57.310481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.278 [2024-12-01 15:00:57.310484] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310488] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ece20) on tqpair=0x12a0510 00:20:24.278 [2024-12-01 15:00:57.310538] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:24.278 [2024-12-01 15:00:57.310548] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:24.278 [2024-12-01 15:00:57.310555] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310563] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a0510) 00:20:24.278 [2024-12-01 15:00:57.310569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.278 [2024-12-01 15:00:57.310585] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ece20, cid 4, qid 0 00:20:24.278 [2024-12-01 15:00:57.310653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.278 [2024-12-01 15:00:57.310659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.278 [2024-12-01 15:00:57.310662] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310665] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a0510): datao=0, datal=4096, cccid=4 00:20:24.278 [2024-12-01 15:00:57.310669] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ece20) on tqpair(0x12a0510): expected_datao=0, payload_size=4096 00:20:24.278 [2024-12-01 15:00:57.310676] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310679] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310686] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.278 [2024-12-01 15:00:57.310691] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.278 [2024-12-01 15:00:57.310694] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310697] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ece20) on tqpair=0x12a0510 00:20:24.278 [2024-12-01 15:00:57.310711] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:24.278 [2024-12-01 15:00:57.310721] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:24.278 [2024-12-01 15:00:57.310730] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:24.278 [2024-12-01 15:00:57.310737] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.278 [2024-12-01 15:00:57.310743] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a0510) 00:20:24.278 [2024-12-01 15:00:57.310761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.278 [2024-12-01 15:00:57.310781] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ece20, cid 4, qid 0 00:20:24.278 [2024-12-01 15:00:57.310868] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.279 [2024-12-01 15:00:57.310873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.279 [2024-12-01 15:00:57.310876] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.310880] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a0510): datao=0, datal=4096, cccid=4 00:20:24.279 [2024-12-01 15:00:57.310884] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ece20) on 
tqpair(0x12a0510): expected_datao=0, payload_size=4096 00:20:24.279 [2024-12-01 15:00:57.310890] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.310893] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.310900] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.279 [2024-12-01 15:00:57.310905] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.279 [2024-12-01 15:00:57.310908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.310911] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ece20) on tqpair=0x12a0510 00:20:24.279 [2024-12-01 15:00:57.310927] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:24.279 [2024-12-01 15:00:57.310936] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:24.279 [2024-12-01 15:00:57.310945] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.310949] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.310952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a0510) 00:20:24.279 [2024-12-01 15:00:57.310958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.279 [2024-12-01 15:00:57.310975] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ece20, cid 4, qid 0 00:20:24.279 [2024-12-01 15:00:57.311044] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.279 [2024-12-01 15:00:57.311050] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.279 [2024-12-01 15:00:57.311053] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311056] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a0510): datao=0, datal=4096, cccid=4 00:20:24.279 [2024-12-01 15:00:57.311060] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ece20) on tqpair(0x12a0510): expected_datao=0, payload_size=4096 00:20:24.279 [2024-12-01 15:00:57.311067] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311070] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.279 [2024-12-01 15:00:57.311081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.279 [2024-12-01 15:00:57.311084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ece20) on tqpair=0x12a0510 00:20:24.279 [2024-12-01 15:00:57.311095] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:24.279 [2024-12-01 15:00:57.311103] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:24.279 [2024-12-01 15:00:57.311123] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:24.279 [2024-12-01 15:00:57.311129] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:24.279 [2024-12-01 15:00:57.311134] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:24.279 [2024-12-01 15:00:57.311140] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:24.279 [2024-12-01 15:00:57.311144] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:24.279 [2024-12-01 15:00:57.311149] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:24.279 [2024-12-01 15:00:57.311162] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311166] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311169] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a0510) 00:20:24.279 [2024-12-01 15:00:57.311175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.279 [2024-12-01 15:00:57.311181] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311185] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a0510) 00:20:24.279 [2024-12-01 15:00:57.311193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.279 [2024-12-01 15:00:57.311214] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ece20, cid 4, qid 0 00:20:24.279 [2024-12-01 15:00:57.311221] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ecf80, cid 5, qid 0 00:20:24.279 [2024-12-01 15:00:57.311296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.279 [2024-12-01 15:00:57.311302] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.279 [2024-12-01 15:00:57.311305] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311308] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ece20) on tqpair=0x12a0510 00:20:24.279 [2024-12-01 15:00:57.311315] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.279 [2024-12-01 15:00:57.311320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.279 [2024-12-01 15:00:57.311323] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311326] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ecf80) on tqpair=0x12a0510 00:20:24.279 [2024-12-01 15:00:57.311336] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311340] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311343] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=5 on tqpair(0x12a0510) 00:20:24.279 [2024-12-01 15:00:57.311349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.279 [2024-12-01 15:00:57.311364] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ecf80, cid 5, qid 0 00:20:24.279 [2024-12-01 15:00:57.311427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.279 [2024-12-01 15:00:57.311433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.279 [2024-12-01 15:00:57.311436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ecf80) on tqpair=0x12a0510 00:20:24.279 [2024-12-01 15:00:57.311449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311457] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a0510) 00:20:24.279 [2024-12-01 15:00:57.311463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.279 [2024-12-01 15:00:57.311477] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ecf80, cid 5, qid 0 00:20:24.279 [2024-12-01 15:00:57.311539] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.279 [2024-12-01 15:00:57.311545] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.279 [2024-12-01 15:00:57.311548] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311551] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ecf80) on tqpair=0x12a0510 00:20:24.279 [2024-12-01 15:00:57.311561] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311568] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a0510) 00:20:24.279 [2024-12-01 15:00:57.311574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.279 [2024-12-01 15:00:57.311589] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ecf80, cid 5, qid 0 00:20:24.279 [2024-12-01 15:00:57.311641] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.279 [2024-12-01 15:00:57.311647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.279 [2024-12-01 15:00:57.311650] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311653] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ecf80) on tqpair=0x12a0510 00:20:24.279 [2024-12-01 15:00:57.311665] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311669] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311673] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a0510) 00:20:24.279 [2024-12-01 15:00:57.311679] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.279 [2024-12-01 15:00:57.311685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.279 [2024-12-01 15:00:57.311688] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311691] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a0510) 00:20:24.280 [2024-12-01 15:00:57.311697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.280 [2024-12-01 15:00:57.311703] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311709] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x12a0510) 00:20:24.280 [2024-12-01 15:00:57.311717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.280 [2024-12-01 15:00:57.311723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311729] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12a0510) 00:20:24.280 [2024-12-01 15:00:57.311747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.280 [2024-12-01 15:00:57.311775] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ecf80, cid 5, qid 0 00:20:24.280 [2024-12-01 15:00:57.311782] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ece20, cid 4, qid 0 00:20:24.280 [2024-12-01 15:00:57.311787] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ed0e0, cid 6, qid 0 00:20:24.280 [2024-12-01 15:00:57.311791] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ed240, cid 7, qid 0 00:20:24.280 [2024-12-01 15:00:57.311925] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.280 [2024-12-01 15:00:57.311931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.280 [2024-12-01 15:00:57.311934] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311937] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a0510): datao=0, datal=8192, cccid=5 00:20:24.280 [2024-12-01 15:00:57.311941] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ecf80) on tqpair(0x12a0510): expected_datao=0, payload_size=8192 00:20:24.280 [2024-12-01 15:00:57.311955] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311959] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311964] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.280 [2024-12-01 15:00:57.311968] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.280 [2024-12-01 15:00:57.311971] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311974] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a0510): datao=0, datal=512, cccid=4 00:20:24.280 [2024-12-01 15:00:57.311978] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ece20) on tqpair(0x12a0510): expected_datao=0, payload_size=512 00:20:24.280 [2024-12-01 15:00:57.311984] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311987] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.311991] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.280 [2024-12-01 15:00:57.311996] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.280 [2024-12-01 15:00:57.311999] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.312002] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a0510): datao=0, datal=512, cccid=6 00:20:24.280 [2024-12-01 15:00:57.312005] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ed0e0) on tqpair(0x12a0510): expected_datao=0, payload_size=512 00:20:24.280 [2024-12-01 15:00:57.312011] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.312014] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.312019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:24.280 [2024-12-01 15:00:57.312023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:24.280 [2024-12-01 15:00:57.312026] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.312029] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a0510): datao=0, datal=4096, cccid=7 00:20:24.280 [2024-12-01 15:00:57.312033] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ed240) on tqpair(0x12a0510): expected_datao=0, payload_size=4096 00:20:24.280 [2024-12-01 15:00:57.312039] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.312042] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.312049] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.280 [2024-12-01 15:00:57.312053] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.280 [2024-12-01 15:00:57.312056] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.312059] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ecf80) on tqpair=0x12a0510 00:20:24.280 [2024-12-01 15:00:57.312075] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.280 [2024-12-01 15:00:57.312081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.280 [2024-12-01 15:00:57.312084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.312087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ece20) on tqpair=0x12a0510 00:20:24.280 [2024-12-01 15:00:57.312097] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.280 [2024-12-01 15:00:57.312102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.280 [2024-12-01 15:00:57.312105] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.280 
[2024-12-01 15:00:57.312108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ed0e0) on tqpair=0x12a0510 00:20:24.280 [2024-12-01 15:00:57.312115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.280 [2024-12-01 15:00:57.312120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.280 [2024-12-01 15:00:57.312123] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.280 [2024-12-01 15:00:57.312126] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ed240) on tqpair=0x12a0510 00:20:24.280 ===================================================== 00:20:24.280 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.280 ===================================================== 00:20:24.280 Controller Capabilities/Features 00:20:24.280 ================================ 00:20:24.280 Vendor ID: 8086 00:20:24.280 Subsystem Vendor ID: 8086 00:20:24.280 Serial Number: SPDK00000000000001 00:20:24.280 Model Number: SPDK bdev Controller 00:20:24.280 Firmware Version: 24.01.1 00:20:24.280 Recommended Arb Burst: 6 00:20:24.280 IEEE OUI Identifier: e4 d2 5c 00:20:24.280 Multi-path I/O 00:20:24.280 May have multiple subsystem ports: Yes 00:20:24.280 May have multiple controllers: Yes 00:20:24.280 Associated with SR-IOV VF: No 00:20:24.280 Max Data Transfer Size: 131072 00:20:24.280 Max Number of Namespaces: 32 00:20:24.280 Max Number of I/O Queues: 127 00:20:24.280 NVMe Specification Version (VS): 1.3 00:20:24.280 NVMe Specification Version (Identify): 1.3 00:20:24.280 Maximum Queue Entries: 128 00:20:24.280 Contiguous Queues Required: Yes 00:20:24.280 Arbitration Mechanisms Supported 00:20:24.280 Weighted Round Robin: Not Supported 00:20:24.280 Vendor Specific: Not Supported 00:20:24.281 Reset Timeout: 15000 ms 00:20:24.281 Doorbell Stride: 4 bytes 00:20:24.281 NVM Subsystem Reset: Not Supported 00:20:24.281 Command Sets Supported 00:20:24.281 NVM Command Set: Supported 00:20:24.281 Boot Partition: Not Supported 00:20:24.281 Memory Page Size Minimum: 4096 bytes 00:20:24.281 Memory Page Size Maximum: 4096 bytes 00:20:24.281 Persistent Memory Region: Not Supported 00:20:24.281 Optional Asynchronous Events Supported 00:20:24.281 Namespace Attribute Notices: Supported 00:20:24.281 Firmware Activation Notices: Not Supported 00:20:24.281 ANA Change Notices: Not Supported 00:20:24.281 PLE Aggregate Log Change Notices: Not Supported 00:20:24.281 LBA Status Info Alert Notices: Not Supported 00:20:24.281 EGE Aggregate Log Change Notices: Not Supported 00:20:24.281 Normal NVM Subsystem Shutdown event: Not Supported 00:20:24.281 Zone Descriptor Change Notices: Not Supported 00:20:24.281 Discovery Log Change Notices: Not Supported 00:20:24.281 Controller Attributes 00:20:24.281 128-bit Host Identifier: Supported 00:20:24.281 Non-Operational Permissive Mode: Not Supported 00:20:24.281 NVM Sets: Not Supported 00:20:24.281 Read Recovery Levels: Not Supported 00:20:24.281 Endurance Groups: Not Supported 00:20:24.281 Predictable Latency Mode: Not Supported 00:20:24.281 Traffic Based Keep ALive: Not Supported 00:20:24.281 Namespace Granularity: Not Supported 00:20:24.281 SQ Associations: Not Supported 00:20:24.281 UUID List: Not Supported 00:20:24.281 Multi-Domain Subsystem: Not Supported 00:20:24.281 Fixed Capacity Management: Not Supported 00:20:24.281 Variable Capacity Management: Not Supported 00:20:24.281 Delete Endurance Group: Not Supported 00:20:24.281 Delete NVM Set: Not Supported 
00:20:24.281 Extended LBA Formats Supported: Not Supported 00:20:24.281 Flexible Data Placement Supported: Not Supported 00:20:24.281 00:20:24.281 Controller Memory Buffer Support 00:20:24.281 ================================ 00:20:24.281 Supported: No 00:20:24.281 00:20:24.281 Persistent Memory Region Support 00:20:24.281 ================================ 00:20:24.281 Supported: No 00:20:24.281 00:20:24.281 Admin Command Set Attributes 00:20:24.281 ============================ 00:20:24.281 Security Send/Receive: Not Supported 00:20:24.281 Format NVM: Not Supported 00:20:24.281 Firmware Activate/Download: Not Supported 00:20:24.281 Namespace Management: Not Supported 00:20:24.281 Device Self-Test: Not Supported 00:20:24.281 Directives: Not Supported 00:20:24.281 NVMe-MI: Not Supported 00:20:24.281 Virtualization Management: Not Supported 00:20:24.281 Doorbell Buffer Config: Not Supported 00:20:24.281 Get LBA Status Capability: Not Supported 00:20:24.281 Command & Feature Lockdown Capability: Not Supported 00:20:24.281 Abort Command Limit: 4 00:20:24.281 Async Event Request Limit: 4 00:20:24.281 Number of Firmware Slots: N/A 00:20:24.281 Firmware Slot 1 Read-Only: N/A 00:20:24.281 Firmware Activation Without Reset: N/A 00:20:24.281 Multiple Update Detection Support: N/A 00:20:24.281 Firmware Update Granularity: No Information Provided 00:20:24.281 Per-Namespace SMART Log: No 00:20:24.281 Asymmetric Namespace Access Log Page: Not Supported 00:20:24.281 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:24.281 Command Effects Log Page: Supported 00:20:24.281 Get Log Page Extended Data: Supported 00:20:24.281 Telemetry Log Pages: Not Supported 00:20:24.281 Persistent Event Log Pages: Not Supported 00:20:24.281 Supported Log Pages Log Page: May Support 00:20:24.281 Commands Supported & Effects Log Page: Not Supported 00:20:24.281 Feature Identifiers & Effects Log Page:May Support 00:20:24.281 NVMe-MI Commands & Effects Log Page: May Support 00:20:24.281 Data Area 4 for Telemetry Log: Not Supported 00:20:24.281 Error Log Page Entries Supported: 128 00:20:24.281 Keep Alive: Supported 00:20:24.281 Keep Alive Granularity: 10000 ms 00:20:24.281 00:20:24.281 NVM Command Set Attributes 00:20:24.281 ========================== 00:20:24.281 Submission Queue Entry Size 00:20:24.281 Max: 64 00:20:24.281 Min: 64 00:20:24.281 Completion Queue Entry Size 00:20:24.281 Max: 16 00:20:24.281 Min: 16 00:20:24.281 Number of Namespaces: 32 00:20:24.281 Compare Command: Supported 00:20:24.281 Write Uncorrectable Command: Not Supported 00:20:24.281 Dataset Management Command: Supported 00:20:24.281 Write Zeroes Command: Supported 00:20:24.281 Set Features Save Field: Not Supported 00:20:24.281 Reservations: Supported 00:20:24.281 Timestamp: Not Supported 00:20:24.281 Copy: Supported 00:20:24.281 Volatile Write Cache: Present 00:20:24.281 Atomic Write Unit (Normal): 1 00:20:24.281 Atomic Write Unit (PFail): 1 00:20:24.281 Atomic Compare & Write Unit: 1 00:20:24.281 Fused Compare & Write: Supported 00:20:24.281 Scatter-Gather List 00:20:24.281 SGL Command Set: Supported 00:20:24.281 SGL Keyed: Supported 00:20:24.281 SGL Bit Bucket Descriptor: Not Supported 00:20:24.281 SGL Metadata Pointer: Not Supported 00:20:24.281 Oversized SGL: Not Supported 00:20:24.281 SGL Metadata Address: Not Supported 00:20:24.281 SGL Offset: Supported 00:20:24.281 Transport SGL Data Block: Not Supported 00:20:24.281 Replay Protected Memory Block: Not Supported 00:20:24.281 00:20:24.281 Firmware Slot Information 00:20:24.281 
========================= 00:20:24.281 Active slot: 1 00:20:24.281 Slot 1 Firmware Revision: 24.01.1 00:20:24.281 00:20:24.281 00:20:24.281 Commands Supported and Effects 00:20:24.281 ============================== 00:20:24.281 Admin Commands 00:20:24.281 -------------- 00:20:24.281 Get Log Page (02h): Supported 00:20:24.281 Identify (06h): Supported 00:20:24.281 Abort (08h): Supported 00:20:24.281 Set Features (09h): Supported 00:20:24.281 Get Features (0Ah): Supported 00:20:24.281 Asynchronous Event Request (0Ch): Supported 00:20:24.281 Keep Alive (18h): Supported 00:20:24.281 I/O Commands 00:20:24.281 ------------ 00:20:24.281 Flush (00h): Supported LBA-Change 00:20:24.281 Write (01h): Supported LBA-Change 00:20:24.281 Read (02h): Supported 00:20:24.281 Compare (05h): Supported 00:20:24.281 Write Zeroes (08h): Supported LBA-Change 00:20:24.281 Dataset Management (09h): Supported LBA-Change 00:20:24.281 Copy (19h): Supported LBA-Change 00:20:24.281 Unknown (79h): Supported LBA-Change 00:20:24.281 Unknown (7Ah): Supported 00:20:24.281 00:20:24.281 Error Log 00:20:24.281 ========= 00:20:24.281 00:20:24.281 Arbitration 00:20:24.281 =========== 00:20:24.281 Arbitration Burst: 1 00:20:24.281 00:20:24.281 Power Management 00:20:24.281 ================ 00:20:24.281 Number of Power States: 1 00:20:24.281 Current Power State: Power State #0 00:20:24.281 Power State #0: 00:20:24.281 Max Power: 0.00 W 00:20:24.281 Non-Operational State: Operational 00:20:24.281 Entry Latency: Not Reported 00:20:24.281 Exit Latency: Not Reported 00:20:24.281 Relative Read Throughput: 0 00:20:24.281 Relative Read Latency: 0 00:20:24.281 Relative Write Throughput: 0 00:20:24.281 Relative Write Latency: 0 00:20:24.281 Idle Power: Not Reported 00:20:24.281 Active Power: Not Reported 00:20:24.281 Non-Operational Permissive Mode: Not Supported 00:20:24.281 00:20:24.281 Health Information 00:20:24.281 ================== 00:20:24.281 Critical Warnings: 00:20:24.281 Available Spare Space: OK 00:20:24.281 Temperature: OK 00:20:24.281 Device Reliability: OK 00:20:24.281 Read Only: No 00:20:24.281 Volatile Memory Backup: OK 00:20:24.281 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:24.281 Temperature Threshold: [2024-12-01 15:00:57.312238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.281 [2024-12-01 15:00:57.312244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.281 [2024-12-01 15:00:57.312247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12a0510) 00:20:24.281 [2024-12-01 15:00:57.312254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.281 [2024-12-01 15:00:57.312274] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ed240, cid 7, qid 0 00:20:24.281 [2024-12-01 15:00:57.312340] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.281 [2024-12-01 15:00:57.312346] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.281 [2024-12-01 15:00:57.312349] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.281 [2024-12-01 15:00:57.312352] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12ed240) on tqpair=0x12a0510 00:20:24.281 [2024-12-01 15:00:57.312392] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:24.281 [2024-12-01 15:00:57.312404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.282 [2024-12-01 15:00:57.312410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.282 [2024-12-01 15:00:57.312415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.282 [2024-12-01 15:00:57.312421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.282 [2024-12-01 15:00:57.312428] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.312432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.312435] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a0510) 00:20:24.282 [2024-12-01 15:00:57.312441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.282 [2024-12-01 15:00:57.312460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12eccc0, cid 3, qid 0 00:20:24.282 [2024-12-01 15:00:57.312520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.282 [2024-12-01 15:00:57.312526] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.282 [2024-12-01 15:00:57.312529] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.312532] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12eccc0) on tqpair=0x12a0510 00:20:24.282 [2024-12-01 15:00:57.312540] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.312543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.312546] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a0510) 00:20:24.282 [2024-12-01 15:00:57.312553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.282 [2024-12-01 15:00:57.312571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12eccc0, cid 3, qid 0 00:20:24.282 [2024-12-01 15:00:57.312647] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.282 [2024-12-01 15:00:57.312652] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.282 [2024-12-01 15:00:57.312655] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.312658] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12eccc0) on tqpair=0x12a0510 00:20:24.282 [2024-12-01 15:00:57.312663] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:24.282 [2024-12-01 15:00:57.312667] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:24.282 [2024-12-01 15:00:57.312676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.312680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.312683] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a0510) 00:20:24.282 [2024-12-01 
15:00:57.312689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.282 [2024-12-01 15:00:57.312704] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12eccc0, cid 3, qid 0 00:20:24.282 [2024-12-01 15:00:57.316766] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.282 [2024-12-01 15:00:57.316781] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.282 [2024-12-01 15:00:57.316786] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.316789] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12eccc0) on tqpair=0x12a0510 00:20:24.282 [2024-12-01 15:00:57.316803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.316807] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.316810] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a0510) 00:20:24.282 [2024-12-01 15:00:57.316818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.282 [2024-12-01 15:00:57.316839] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12eccc0, cid 3, qid 0 00:20:24.282 [2024-12-01 15:00:57.316916] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:24.282 [2024-12-01 15:00:57.316922] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:24.282 [2024-12-01 15:00:57.316926] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:24.282 [2024-12-01 15:00:57.316929] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12eccc0) on tqpair=0x12a0510 00:20:24.282 [2024-12-01 15:00:57.316937] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:24.282 0 Kelvin (-273 Celsius) 00:20:24.282 Available Spare: 0% 00:20:24.282 Available Spare Threshold: 0% 00:20:24.282 Life Percentage Used: 0% 00:20:24.282 Data Units Read: 0 00:20:24.282 Data Units Written: 0 00:20:24.282 Host Read Commands: 0 00:20:24.282 Host Write Commands: 0 00:20:24.282 Controller Busy Time: 0 minutes 00:20:24.282 Power Cycles: 0 00:20:24.282 Power On Hours: 0 hours 00:20:24.282 Unsafe Shutdowns: 0 00:20:24.282 Unrecoverable Media Errors: 0 00:20:24.282 Lifetime Error Log Entries: 0 00:20:24.282 Warning Temperature Time: 0 minutes 00:20:24.282 Critical Temperature Time: 0 minutes 00:20:24.282 00:20:24.282 Number of Queues 00:20:24.282 ================ 00:20:24.282 Number of I/O Submission Queues: 127 00:20:24.282 Number of I/O Completion Queues: 127 00:20:24.282 00:20:24.282 Active Namespaces 00:20:24.282 ================= 00:20:24.282 Namespace ID:1 00:20:24.282 Error Recovery Timeout: Unlimited 00:20:24.282 Command Set Identifier: NVM (00h) 00:20:24.282 Deallocate: Supported 00:20:24.282 Deallocated/Unwritten Error: Not Supported 00:20:24.282 Deallocated Read Value: Unknown 00:20:24.282 Deallocate in Write Zeroes: Not Supported 00:20:24.282 Deallocated Guard Field: 0xFFFF 00:20:24.282 Flush: Supported 00:20:24.282 Reservation: Supported 00:20:24.282 Namespace Sharing Capabilities: Multiple Controllers 00:20:24.282 Size (in LBAs): 131072 (0GiB) 00:20:24.282 Capacity (in LBAs): 131072 (0GiB) 00:20:24.282 Utilization (in LBAs): 131072 (0GiB) 00:20:24.282 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:20:24.282 EUI64: ABCDEF0123456789 00:20:24.282 UUID: 0add5ce0-4977-477a-a88d-f49ea2a3b0d0 00:20:24.282 Thin Provisioning: Not Supported 00:20:24.282 Per-NS Atomic Units: Yes 00:20:24.282 Atomic Boundary Size (Normal): 0 00:20:24.282 Atomic Boundary Size (PFail): 0 00:20:24.282 Atomic Boundary Offset: 0 00:20:24.282 Maximum Single Source Range Length: 65535 00:20:24.282 Maximum Copy Length: 65535 00:20:24.282 Maximum Source Range Count: 1 00:20:24.282 NGUID/EUI64 Never Reused: No 00:20:24.282 Namespace Write Protected: No 00:20:24.282 Number of LBA Formats: 1 00:20:24.282 Current LBA Format: LBA Format #00 00:20:24.282 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:24.282 00:20:24.282 15:00:57 -- host/identify.sh@51 -- # sync 00:20:24.541 15:00:57 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.541 15:00:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.541 15:00:57 -- common/autotest_common.sh@10 -- # set +x 00:20:24.541 15:00:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.541 15:00:57 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:24.541 15:00:57 -- host/identify.sh@56 -- # nvmftestfini 00:20:24.541 15:00:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:24.541 15:00:57 -- nvmf/common.sh@116 -- # sync 00:20:24.541 15:00:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:24.542 15:00:57 -- nvmf/common.sh@119 -- # set +e 00:20:24.542 15:00:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:24.542 15:00:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:24.542 rmmod nvme_tcp 00:20:24.542 rmmod nvme_fabrics 00:20:24.542 rmmod nvme_keyring 00:20:24.542 15:00:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:24.542 15:00:57 -- nvmf/common.sh@123 -- # set -e 00:20:24.542 15:00:57 -- nvmf/common.sh@124 -- # return 0 00:20:24.542 15:00:57 -- nvmf/common.sh@477 -- # '[' -n 93583 ']' 00:20:24.542 15:00:57 -- nvmf/common.sh@478 -- # killprocess 93583 00:20:24.542 15:00:57 -- common/autotest_common.sh@936 -- # '[' -z 93583 ']' 00:20:24.542 15:00:57 -- common/autotest_common.sh@940 -- # kill -0 93583 00:20:24.542 15:00:57 -- common/autotest_common.sh@941 -- # uname 00:20:24.542 15:00:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:24.542 15:00:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93583 00:20:24.542 15:00:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:24.542 15:00:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:24.542 killing process with pid 93583 00:20:24.542 15:00:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93583' 00:20:24.542 15:00:57 -- common/autotest_common.sh@955 -- # kill 93583 00:20:24.542 [2024-12-01 15:00:57.503968] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:24.542 15:00:57 -- common/autotest_common.sh@960 -- # wait 93583 00:20:24.801 15:00:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:24.801 15:00:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:24.801 15:00:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:24.801 15:00:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.801 15:00:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:24.801 15:00:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:20:24.801 15:00:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.801 15:00:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.801 15:00:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:24.801 00:20:24.801 real 0m2.780s 00:20:24.801 user 0m7.681s 00:20:24.801 sys 0m0.766s 00:20:24.801 15:00:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:24.801 15:00:57 -- common/autotest_common.sh@10 -- # set +x 00:20:24.801 ************************************ 00:20:24.801 END TEST nvmf_identify 00:20:24.801 ************************************ 00:20:24.801 15:00:57 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:24.801 15:00:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:24.801 15:00:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:24.801 15:00:57 -- common/autotest_common.sh@10 -- # set +x 00:20:24.801 ************************************ 00:20:24.801 START TEST nvmf_perf 00:20:24.801 ************************************ 00:20:24.801 15:00:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:24.801 * Looking for test storage... 00:20:25.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:25.061 15:00:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:25.061 15:00:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:25.061 15:00:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:25.061 15:00:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:25.061 15:00:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:25.061 15:00:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:25.061 15:00:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:25.061 15:00:57 -- scripts/common.sh@335 -- # IFS=.-: 00:20:25.061 15:00:57 -- scripts/common.sh@335 -- # read -ra ver1 00:20:25.061 15:00:57 -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.061 15:00:57 -- scripts/common.sh@336 -- # read -ra ver2 00:20:25.061 15:00:57 -- scripts/common.sh@337 -- # local 'op=<' 00:20:25.061 15:00:57 -- scripts/common.sh@339 -- # ver1_l=2 00:20:25.061 15:00:57 -- scripts/common.sh@340 -- # ver2_l=1 00:20:25.061 15:00:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:25.061 15:00:57 -- scripts/common.sh@343 -- # case "$op" in 00:20:25.061 15:00:57 -- scripts/common.sh@344 -- # : 1 00:20:25.061 15:00:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:25.061 15:00:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.061 15:00:57 -- scripts/common.sh@364 -- # decimal 1 00:20:25.061 15:00:57 -- scripts/common.sh@352 -- # local d=1 00:20:25.061 15:00:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.061 15:00:57 -- scripts/common.sh@354 -- # echo 1 00:20:25.061 15:00:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:25.061 15:00:57 -- scripts/common.sh@365 -- # decimal 2 00:20:25.061 15:00:57 -- scripts/common.sh@352 -- # local d=2 00:20:25.061 15:00:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.061 15:00:57 -- scripts/common.sh@354 -- # echo 2 00:20:25.061 15:00:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:25.061 15:00:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:25.061 15:00:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:25.061 15:00:57 -- scripts/common.sh@367 -- # return 0 00:20:25.061 15:00:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.061 15:00:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:25.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.061 --rc genhtml_branch_coverage=1 00:20:25.061 --rc genhtml_function_coverage=1 00:20:25.061 --rc genhtml_legend=1 00:20:25.061 --rc geninfo_all_blocks=1 00:20:25.061 --rc geninfo_unexecuted_blocks=1 00:20:25.061 00:20:25.061 ' 00:20:25.061 15:00:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:25.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.061 --rc genhtml_branch_coverage=1 00:20:25.061 --rc genhtml_function_coverage=1 00:20:25.061 --rc genhtml_legend=1 00:20:25.061 --rc geninfo_all_blocks=1 00:20:25.061 --rc geninfo_unexecuted_blocks=1 00:20:25.061 00:20:25.061 ' 00:20:25.061 15:00:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:25.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.061 --rc genhtml_branch_coverage=1 00:20:25.061 --rc genhtml_function_coverage=1 00:20:25.061 --rc genhtml_legend=1 00:20:25.061 --rc geninfo_all_blocks=1 00:20:25.061 --rc geninfo_unexecuted_blocks=1 00:20:25.061 00:20:25.061 ' 00:20:25.061 15:00:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:25.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.061 --rc genhtml_branch_coverage=1 00:20:25.061 --rc genhtml_function_coverage=1 00:20:25.061 --rc genhtml_legend=1 00:20:25.061 --rc geninfo_all_blocks=1 00:20:25.061 --rc geninfo_unexecuted_blocks=1 00:20:25.061 00:20:25.061 ' 00:20:25.061 15:00:58 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:25.061 15:00:58 -- nvmf/common.sh@7 -- # uname -s 00:20:25.061 15:00:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.061 15:00:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.061 15:00:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.061 15:00:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.061 15:00:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.061 15:00:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.061 15:00:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.061 15:00:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.061 15:00:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.061 15:00:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.061 15:00:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:20:25.061 
15:00:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:20:25.061 15:00:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.061 15:00:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.061 15:00:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:25.061 15:00:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:25.061 15:00:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.061 15:00:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.061 15:00:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.061 15:00:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.061 15:00:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.061 15:00:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.062 15:00:58 -- paths/export.sh@5 -- # export PATH 00:20:25.062 15:00:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.062 15:00:58 -- nvmf/common.sh@46 -- # : 0 00:20:25.062 15:00:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:25.062 15:00:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:25.062 15:00:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:25.062 15:00:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.062 15:00:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.062 15:00:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
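A minimal sketch, assuming the target address 10.0.0.2:4420 and the subsystem NQN nqn.2016-06.io.spdk:cnode1 used later in this run, of how the host identity generated above (NVME_HOSTNQN / NVME_HOSTID) is consumed on the initiator side; the HOSTID derivation shown here is illustrative, not necessarily how common.sh computes it:

    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # keep only the UUID portion for --hostid
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"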
00:20:25.062 15:00:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:25.062 15:00:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:25.062 15:00:58 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:25.062 15:00:58 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:25.062 15:00:58 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:25.062 15:00:58 -- host/perf.sh@17 -- # nvmftestinit 00:20:25.062 15:00:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:25.062 15:00:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.062 15:00:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:25.062 15:00:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:25.062 15:00:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:25.062 15:00:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.062 15:00:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.062 15:00:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.062 15:00:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:25.062 15:00:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:25.062 15:00:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:25.062 15:00:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:25.062 15:00:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:25.062 15:00:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:25.062 15:00:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.062 15:00:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.062 15:00:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:25.062 15:00:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:25.062 15:00:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:25.062 15:00:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:25.062 15:00:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:25.062 15:00:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.062 15:00:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:25.062 15:00:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:25.062 15:00:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:25.062 15:00:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:25.062 15:00:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:25.062 15:00:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:25.062 Cannot find device "nvmf_tgt_br" 00:20:25.062 15:00:58 -- nvmf/common.sh@154 -- # true 00:20:25.062 15:00:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:25.062 Cannot find device "nvmf_tgt_br2" 00:20:25.062 15:00:58 -- nvmf/common.sh@155 -- # true 00:20:25.062 15:00:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:25.062 15:00:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:25.062 Cannot find device "nvmf_tgt_br" 00:20:25.062 15:00:58 -- nvmf/common.sh@157 -- # true 00:20:25.062 15:00:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:25.062 Cannot find device "nvmf_tgt_br2" 00:20:25.062 15:00:58 -- nvmf/common.sh@158 -- # true 00:20:25.062 15:00:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:25.062 15:00:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:25.062 15:00:58 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.062 15:00:58 -- nvmf/common.sh@161 -- # true 00:20:25.062 15:00:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.062 15:00:58 -- nvmf/common.sh@162 -- # true 00:20:25.062 15:00:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:25.062 15:00:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:25.321 15:00:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:25.321 15:00:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:25.321 15:00:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:25.321 15:00:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:25.321 15:00:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:25.321 15:00:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:25.321 15:00:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:25.321 15:00:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:25.321 15:00:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:25.321 15:00:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:25.321 15:00:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:25.321 15:00:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:25.321 15:00:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:25.321 15:00:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:25.321 15:00:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:25.321 15:00:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:25.321 15:00:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:25.321 15:00:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:25.321 15:00:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:25.321 15:00:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:25.321 15:00:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:25.321 15:00:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:25.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:20:25.321 00:20:25.321 --- 10.0.0.2 ping statistics --- 00:20:25.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.321 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:25.321 15:00:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:25.321 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:25.321 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:25.321 00:20:25.321 --- 10.0.0.3 ping statistics --- 00:20:25.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.321 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:25.321 15:00:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:25.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:25.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:25.321 00:20:25.321 --- 10.0.0.1 ping statistics --- 00:20:25.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.321 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:25.321 15:00:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.321 15:00:58 -- nvmf/common.sh@421 -- # return 0 00:20:25.321 15:00:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:25.321 15:00:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.322 15:00:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:25.322 15:00:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:25.322 15:00:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.322 15:00:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:25.322 15:00:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:25.322 15:00:58 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:25.322 15:00:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:25.322 15:00:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:25.322 15:00:58 -- common/autotest_common.sh@10 -- # set +x 00:20:25.322 15:00:58 -- nvmf/common.sh@469 -- # nvmfpid=93821 00:20:25.322 15:00:58 -- nvmf/common.sh@470 -- # waitforlisten 93821 00:20:25.322 15:00:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:25.322 15:00:58 -- common/autotest_common.sh@829 -- # '[' -z 93821 ']' 00:20:25.322 15:00:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.322 15:00:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.322 15:00:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.322 15:00:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.322 15:00:58 -- common/autotest_common.sh@10 -- # set +x 00:20:25.585 [2024-12-01 15:00:58.462254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:25.586 [2024-12-01 15:00:58.462348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.586 [2024-12-01 15:00:58.602838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.586 [2024-12-01 15:00:58.658330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:25.586 [2024-12-01 15:00:58.658664] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.586 [2024-12-01 15:00:58.658762] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
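A condensed sketch of the veth/bridge topology that nvmf_veth_init builds in the trace above (only the first target interface is shown; nvmf_tgt_if2 / 10.0.0.3 follows the same pattern):

    # One namespace for the target, two veth pairs bridged on the host side.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # The initiator keeps 10.0.0.1; the target answers on 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # The bridge joins the host-side peers; NVMe/TCP traffic to port 4420 is allowed in.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # same reachability check the trace performs above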
00:20:25.586 [2024-12-01 15:00:58.658842] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.586 [2024-12-01 15:00:58.659055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.586 [2024-12-01 15:00:58.659176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.586 [2024-12-01 15:00:58.659890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.586 [2024-12-01 15:00:58.659901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.524 15:00:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.524 15:00:59 -- common/autotest_common.sh@862 -- # return 0 00:20:26.524 15:00:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:26.524 15:00:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.524 15:00:59 -- common/autotest_common.sh@10 -- # set +x 00:20:26.524 15:00:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.524 15:00:59 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:26.524 15:00:59 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:27.092 15:00:59 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:27.092 15:00:59 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:27.351 15:01:00 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:27.351 15:01:00 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:27.610 15:01:00 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:27.610 15:01:00 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:27.610 15:01:00 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:27.610 15:01:00 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:27.610 15:01:00 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:27.610 [2024-12-01 15:01:00.688459] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.610 15:01:00 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:28.178 15:01:00 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:28.178 15:01:00 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:28.178 15:01:01 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:28.178 15:01:01 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:28.437 15:01:01 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.696 [2024-12-01 15:01:01.695363] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.696 15:01:01 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:28.957 15:01:01 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:28.957 15:01:01 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:28.957 15:01:01 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:28.957 15:01:01 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:30.333 Initializing NVMe Controllers 00:20:30.333 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:30.333 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:30.333 Initialization complete. Launching workers. 00:20:30.333 ======================================================== 00:20:30.333 Latency(us) 00:20:30.333 Device Information : IOPS MiB/s Average min max 00:20:30.333 PCIE (0000:00:06.0) NSID 1 from core 0: 23774.10 92.87 1346.30 312.37 7439.45 00:20:30.333 ======================================================== 00:20:30.333 Total : 23774.10 92.87 1346.30 312.37 7439.45 00:20:30.333 00:20:30.333 15:01:03 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:31.710 Initializing NVMe Controllers 00:20:31.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:31.710 Initialization complete. Launching workers. 00:20:31.710 ======================================================== 00:20:31.710 Latency(us) 00:20:31.710 Device Information : IOPS MiB/s Average min max 00:20:31.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3719.96 14.53 268.45 100.65 7128.97 00:20:31.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8063.44 4757.65 12001.85 00:20:31.710 ======================================================== 00:20:31.710 Total : 3844.96 15.02 521.86 100.65 12001.85 00:20:31.710 00:20:31.710 15:01:04 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.661 [2024-12-01 15:01:05.655784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655888] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be 
set 00:20:32.661 [2024-12-01 15:01:05.655903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655917] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655930] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.661 [2024-12-01 15:01:05.655937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a090f0 is same with the state(5) to be set 00:20:32.931 Initializing NVMe Controllers 00:20:32.931 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:32.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:32.931 Initialization complete. Launching workers. 00:20:32.931 ======================================================== 00:20:32.931 Latency(us) 00:20:32.931 Device Information : IOPS MiB/s Average min max 00:20:32.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10474.30 40.92 3055.04 566.11 6495.57 00:20:32.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2704.62 10.56 11965.31 6810.17 20170.30 00:20:32.931 ======================================================== 00:20:32.931 Total : 13178.92 51.48 4883.63 566.11 20170.30 00:20:32.931 00:20:32.931 15:01:05 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:32.931 15:01:05 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.468 Initializing NVMe Controllers 00:20:35.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.468 Controller IO queue size 128, less than required. 00:20:35.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:35.468 Controller IO queue size 128, less than required. 00:20:35.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:35.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:35.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:35.468 Initialization complete. Launching workers. 
00:20:35.468 ======================================================== 00:20:35.468 Latency(us) 00:20:35.468 Device Information : IOPS MiB/s Average min max 00:20:35.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1681.13 420.28 77285.46 49500.85 147615.75 00:20:35.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 567.70 141.92 232066.58 64388.37 362790.06 00:20:35.468 ======================================================== 00:20:35.468 Total : 2248.83 562.21 116358.76 49500.85 362790.06 00:20:35.468 00:20:35.468 15:01:08 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:35.468 No valid NVMe controllers or AIO or URING devices found 00:20:35.468 Initializing NVMe Controllers 00:20:35.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.468 Controller IO queue size 128, less than required. 00:20:35.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:35.468 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:35.468 Controller IO queue size 128, less than required. 00:20:35.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:35.468 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:35.468 WARNING: Some requested NVMe devices were skipped 00:20:35.468 15:01:08 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:38.003 Initializing NVMe Controllers 00:20:38.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.003 Controller IO queue size 128, less than required. 00:20:38.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:38.003 Controller IO queue size 128, less than required. 00:20:38.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:38.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:38.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:38.003 Initialization complete. Launching workers. 
00:20:38.003 00:20:38.003 ==================== 00:20:38.003 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:38.003 TCP transport: 00:20:38.003 polls: 10421 00:20:38.003 idle_polls: 6603 00:20:38.003 sock_completions: 3818 00:20:38.003 nvme_completions: 4470 00:20:38.003 submitted_requests: 6857 00:20:38.003 queued_requests: 1 00:20:38.003 00:20:38.003 ==================== 00:20:38.003 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:38.003 TCP transport: 00:20:38.003 polls: 13556 00:20:38.003 idle_polls: 9898 00:20:38.003 sock_completions: 3658 00:20:38.003 nvme_completions: 6947 00:20:38.003 submitted_requests: 10661 00:20:38.003 queued_requests: 1 00:20:38.003 ======================================================== 00:20:38.003 Latency(us) 00:20:38.003 Device Information : IOPS MiB/s Average min max 00:20:38.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1180.89 295.22 111514.24 67593.32 179771.22 00:20:38.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1800.34 450.08 72188.05 34266.82 107664.98 00:20:38.003 ======================================================== 00:20:38.003 Total : 2981.23 745.31 87765.52 34266.82 179771.22 00:20:38.003 00:20:38.003 15:01:11 -- host/perf.sh@66 -- # sync 00:20:38.262 15:01:11 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.520 15:01:11 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:38.520 15:01:11 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:38.520 15:01:11 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:38.780 15:01:11 -- host/perf.sh@72 -- # ls_guid=d09c3c15-5f5d-4534-803a-8ea6de33c97d 00:20:38.780 15:01:11 -- host/perf.sh@73 -- # get_lvs_free_mb d09c3c15-5f5d-4534-803a-8ea6de33c97d 00:20:38.780 15:01:11 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d09c3c15-5f5d-4534-803a-8ea6de33c97d 00:20:38.780 15:01:11 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:38.780 15:01:11 -- common/autotest_common.sh@1355 -- # local fc 00:20:38.780 15:01:11 -- common/autotest_common.sh@1356 -- # local cs 00:20:38.780 15:01:11 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:38.780 15:01:11 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:38.780 { 00:20:38.780 "base_bdev": "Nvme0n1", 00:20:38.780 "block_size": 4096, 00:20:38.780 "cluster_size": 4194304, 00:20:38.780 "free_clusters": 1278, 00:20:38.780 "name": "lvs_0", 00:20:38.780 "total_data_clusters": 1278, 00:20:38.780 "uuid": "d09c3c15-5f5d-4534-803a-8ea6de33c97d" 00:20:38.780 } 00:20:38.780 ]' 00:20:38.780 15:01:11 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d09c3c15-5f5d-4534-803a-8ea6de33c97d") .free_clusters' 00:20:39.039 15:01:11 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:39.039 15:01:11 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d09c3c15-5f5d-4534-803a-8ea6de33c97d") .cluster_size' 00:20:39.039 5112 00:20:39.039 15:01:11 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:39.039 15:01:11 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:39.039 15:01:11 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:39.039 15:01:11 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:39.039 15:01:11 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u d09c3c15-5f5d-4534-803a-8ea6de33c97d lbd_0 5112 00:20:39.298 15:01:12 -- host/perf.sh@80 -- # lb_guid=8bceef27-89e8-4386-811b-8bd47d65f609 00:20:39.298 15:01:12 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8bceef27-89e8-4386-811b-8bd47d65f609 lvs_n_0 00:20:39.556 15:01:12 -- host/perf.sh@83 -- # ls_nested_guid=8955153a-eea4-460f-befd-861f13154bff 00:20:39.556 15:01:12 -- host/perf.sh@84 -- # get_lvs_free_mb 8955153a-eea4-460f-befd-861f13154bff 00:20:39.556 15:01:12 -- common/autotest_common.sh@1353 -- # local lvs_uuid=8955153a-eea4-460f-befd-861f13154bff 00:20:39.556 15:01:12 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:39.556 15:01:12 -- common/autotest_common.sh@1355 -- # local fc 00:20:39.556 15:01:12 -- common/autotest_common.sh@1356 -- # local cs 00:20:39.556 15:01:12 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:39.816 15:01:12 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:39.816 { 00:20:39.816 "base_bdev": "Nvme0n1", 00:20:39.816 "block_size": 4096, 00:20:39.816 "cluster_size": 4194304, 00:20:39.816 "free_clusters": 0, 00:20:39.816 "name": "lvs_0", 00:20:39.816 "total_data_clusters": 1278, 00:20:39.816 "uuid": "d09c3c15-5f5d-4534-803a-8ea6de33c97d" 00:20:39.816 }, 00:20:39.816 { 00:20:39.816 "base_bdev": "8bceef27-89e8-4386-811b-8bd47d65f609", 00:20:39.816 "block_size": 4096, 00:20:39.816 "cluster_size": 4194304, 00:20:39.816 "free_clusters": 1276, 00:20:39.816 "name": "lvs_n_0", 00:20:39.816 "total_data_clusters": 1276, 00:20:39.816 "uuid": "8955153a-eea4-460f-befd-861f13154bff" 00:20:39.816 } 00:20:39.816 ]' 00:20:39.816 15:01:12 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="8955153a-eea4-460f-befd-861f13154bff") .free_clusters' 00:20:39.816 15:01:12 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:39.816 15:01:12 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="8955153a-eea4-460f-befd-861f13154bff") .cluster_size' 00:20:40.075 5104 00:20:40.075 15:01:12 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:40.075 15:01:12 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:40.075 15:01:12 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:40.075 15:01:12 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:40.075 15:01:12 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8955153a-eea4-460f-befd-861f13154bff lbd_nest_0 5104 00:20:40.075 15:01:13 -- host/perf.sh@88 -- # lb_nested_guid=274082a8-5569-4018-8a3f-8333132c4de1 00:20:40.075 15:01:13 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.333 15:01:13 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:40.333 15:01:13 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 274082a8-5569-4018-8a3f-8333132c4de1 00:20:40.593 15:01:13 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.852 15:01:13 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:40.852 15:01:13 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:40.852 15:01:13 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:40.852 15:01:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:40.852 15:01:13 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:41.111 No valid NVMe controllers or AIO or URING devices found 00:20:41.370 Initializing NVMe Controllers 00:20:41.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:41.370 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:41.370 WARNING: Some requested NVMe devices were skipped 00:20:41.370 15:01:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:41.370 15:01:14 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:53.575 Initializing NVMe Controllers 00:20:53.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:53.575 Initialization complete. Launching workers. 00:20:53.575 ======================================================== 00:20:53.575 Latency(us) 00:20:53.575 Device Information : IOPS MiB/s Average min max 00:20:53.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 863.19 107.90 1157.98 395.69 8268.39 00:20:53.575 ======================================================== 00:20:53.575 Total : 863.19 107.90 1157.98 395.69 8268.39 00:20:53.575 00:20:53.575 15:01:24 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:53.575 15:01:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:53.575 15:01:24 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:53.575 No valid NVMe controllers or AIO or URING devices found 00:20:53.575 Initializing NVMe Controllers 00:20:53.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.575 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:53.575 WARNING: Some requested NVMe devices were skipped 00:20:53.575 15:01:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:53.575 15:01:24 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:03.550 Initializing NVMe Controllers 00:21:03.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:03.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:03.550 Initialization complete. Launching workers. 
00:21:03.550 ======================================================== 00:21:03.550 Latency(us) 00:21:03.550 Device Information : IOPS MiB/s Average min max 00:21:03.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1078.19 134.77 29696.52 7874.06 244997.15 00:21:03.550 ======================================================== 00:21:03.550 Total : 1078.19 134.77 29696.52 7874.06 244997.15 00:21:03.550 00:21:03.550 15:01:35 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:03.550 15:01:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:03.550 15:01:35 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:03.550 No valid NVMe controllers or AIO or URING devices found 00:21:03.550 Initializing NVMe Controllers 00:21:03.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:03.550 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:03.550 WARNING: Some requested NVMe devices were skipped 00:21:03.550 15:01:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:03.550 15:01:35 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:13.529 Initializing NVMe Controllers 00:21:13.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.529 Controller IO queue size 128, less than required. 00:21:13.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:13.529 Initialization complete. Launching workers. 
00:21:13.529 ======================================================== 00:21:13.529 Latency(us) 00:21:13.529 Device Information : IOPS MiB/s Average min max 00:21:13.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3639.72 454.96 35208.70 12454.55 73519.44 00:21:13.529 ======================================================== 00:21:13.529 Total : 3639.72 454.96 35208.70 12454.55 73519.44 00:21:13.529 00:21:13.529 15:01:45 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:13.529 15:01:46 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 274082a8-5569-4018-8a3f-8333132c4de1 00:21:13.529 15:01:46 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:13.529 15:01:46 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8bceef27-89e8-4386-811b-8bd47d65f609 00:21:13.788 15:01:46 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:14.048 15:01:47 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:14.048 15:01:47 -- host/perf.sh@114 -- # nvmftestfini 00:21:14.048 15:01:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:14.048 15:01:47 -- nvmf/common.sh@116 -- # sync 00:21:14.307 15:01:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:14.307 15:01:47 -- nvmf/common.sh@119 -- # set +e 00:21:14.307 15:01:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:14.307 15:01:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:14.307 rmmod nvme_tcp 00:21:14.307 rmmod nvme_fabrics 00:21:14.307 rmmod nvme_keyring 00:21:14.307 15:01:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:14.307 15:01:47 -- nvmf/common.sh@123 -- # set -e 00:21:14.307 15:01:47 -- nvmf/common.sh@124 -- # return 0 00:21:14.307 15:01:47 -- nvmf/common.sh@477 -- # '[' -n 93821 ']' 00:21:14.307 15:01:47 -- nvmf/common.sh@478 -- # killprocess 93821 00:21:14.307 15:01:47 -- common/autotest_common.sh@936 -- # '[' -z 93821 ']' 00:21:14.307 15:01:47 -- common/autotest_common.sh@940 -- # kill -0 93821 00:21:14.307 15:01:47 -- common/autotest_common.sh@941 -- # uname 00:21:14.307 15:01:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:14.307 15:01:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93821 00:21:14.307 killing process with pid 93821 00:21:14.307 15:01:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:14.307 15:01:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:14.307 15:01:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93821' 00:21:14.307 15:01:47 -- common/autotest_common.sh@955 -- # kill 93821 00:21:14.307 15:01:47 -- common/autotest_common.sh@960 -- # wait 93821 00:21:15.686 15:01:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:15.686 15:01:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:15.686 15:01:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:15.686 15:01:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:15.686 15:01:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:15.686 15:01:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.686 15:01:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.686 15:01:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.686 15:01:48 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:15.686 00:21:15.686 real 0m50.903s 00:21:15.686 user 3m11.643s 00:21:15.686 sys 0m10.543s 00:21:15.686 15:01:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:15.687 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:21:15.687 ************************************ 00:21:15.687 END TEST nvmf_perf 00:21:15.687 ************************************ 00:21:15.687 15:01:48 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:15.687 15:01:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:15.687 15:01:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:15.687 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:21:15.687 ************************************ 00:21:15.687 START TEST nvmf_fio_host 00:21:15.687 ************************************ 00:21:15.687 15:01:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:15.944 * Looking for test storage... 00:21:15.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:15.944 15:01:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:15.944 15:01:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:15.944 15:01:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:15.944 15:01:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:15.944 15:01:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:15.944 15:01:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:15.944 15:01:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:15.944 15:01:48 -- scripts/common.sh@335 -- # IFS=.-: 00:21:15.944 15:01:48 -- scripts/common.sh@335 -- # read -ra ver1 00:21:15.944 15:01:48 -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.944 15:01:48 -- scripts/common.sh@336 -- # read -ra ver2 00:21:15.944 15:01:48 -- scripts/common.sh@337 -- # local 'op=<' 00:21:15.944 15:01:48 -- scripts/common.sh@339 -- # ver1_l=2 00:21:15.944 15:01:48 -- scripts/common.sh@340 -- # ver2_l=1 00:21:15.944 15:01:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:15.944 15:01:48 -- scripts/common.sh@343 -- # case "$op" in 00:21:15.944 15:01:48 -- scripts/common.sh@344 -- # : 1 00:21:15.944 15:01:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:15.944 15:01:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.944 15:01:48 -- scripts/common.sh@364 -- # decimal 1 00:21:15.944 15:01:48 -- scripts/common.sh@352 -- # local d=1 00:21:15.944 15:01:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.944 15:01:48 -- scripts/common.sh@354 -- # echo 1 00:21:15.944 15:01:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:15.944 15:01:48 -- scripts/common.sh@365 -- # decimal 2 00:21:15.944 15:01:48 -- scripts/common.sh@352 -- # local d=2 00:21:15.944 15:01:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.944 15:01:48 -- scripts/common.sh@354 -- # echo 2 00:21:15.944 15:01:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:15.944 15:01:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:15.944 15:01:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:15.944 15:01:48 -- scripts/common.sh@367 -- # return 0 00:21:15.944 15:01:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.944 15:01:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:15.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.944 --rc genhtml_branch_coverage=1 00:21:15.944 --rc genhtml_function_coverage=1 00:21:15.944 --rc genhtml_legend=1 00:21:15.944 --rc geninfo_all_blocks=1 00:21:15.944 --rc geninfo_unexecuted_blocks=1 00:21:15.944 00:21:15.944 ' 00:21:15.944 15:01:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:15.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.944 --rc genhtml_branch_coverage=1 00:21:15.944 --rc genhtml_function_coverage=1 00:21:15.944 --rc genhtml_legend=1 00:21:15.944 --rc geninfo_all_blocks=1 00:21:15.944 --rc geninfo_unexecuted_blocks=1 00:21:15.944 00:21:15.944 ' 00:21:15.944 15:01:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:15.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.944 --rc genhtml_branch_coverage=1 00:21:15.944 --rc genhtml_function_coverage=1 00:21:15.944 --rc genhtml_legend=1 00:21:15.944 --rc geninfo_all_blocks=1 00:21:15.944 --rc geninfo_unexecuted_blocks=1 00:21:15.944 00:21:15.944 ' 00:21:15.944 15:01:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:15.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.944 --rc genhtml_branch_coverage=1 00:21:15.944 --rc genhtml_function_coverage=1 00:21:15.944 --rc genhtml_legend=1 00:21:15.944 --rc geninfo_all_blocks=1 00:21:15.944 --rc geninfo_unexecuted_blocks=1 00:21:15.944 00:21:15.944 ' 00:21:15.944 15:01:48 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:15.944 15:01:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.944 15:01:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.944 15:01:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.945 15:01:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.945 15:01:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.945 15:01:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.945 15:01:48 -- paths/export.sh@5 -- # export PATH 00:21:15.945 15:01:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.945 15:01:48 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:15.945 15:01:48 -- nvmf/common.sh@7 -- # uname -s 00:21:15.945 15:01:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.945 15:01:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.945 15:01:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.945 15:01:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.945 15:01:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.945 15:01:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.945 15:01:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.945 15:01:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.945 15:01:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.945 15:01:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.945 15:01:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:21:15.945 15:01:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:21:15.945 15:01:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.945 15:01:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.945 15:01:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:15.945 15:01:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:15.945 15:01:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.945 15:01:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.945 15:01:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.945 15:01:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.945 15:01:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.945 15:01:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.945 15:01:49 -- paths/export.sh@5 -- # export PATH 00:21:15.945 15:01:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.945 15:01:49 -- nvmf/common.sh@46 -- # : 0 00:21:15.945 15:01:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:15.945 15:01:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:15.945 15:01:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:15.945 15:01:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.945 15:01:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.945 15:01:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:15.945 15:01:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:15.945 15:01:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:15.945 15:01:49 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:15.945 15:01:49 -- host/fio.sh@14 -- # nvmftestinit 00:21:15.945 15:01:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:15.945 15:01:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.945 15:01:49 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:21:15.945 15:01:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:15.945 15:01:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:15.945 15:01:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.945 15:01:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.945 15:01:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.945 15:01:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:15.945 15:01:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:15.945 15:01:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:15.945 15:01:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:15.945 15:01:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:15.945 15:01:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:15.945 15:01:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.945 15:01:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.945 15:01:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:15.945 15:01:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:15.945 15:01:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:15.945 15:01:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:15.945 15:01:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:15.945 15:01:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.945 15:01:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:15.945 15:01:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:15.945 15:01:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:15.945 15:01:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:15.945 15:01:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:15.945 15:01:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:15.945 Cannot find device "nvmf_tgt_br" 00:21:15.945 15:01:49 -- nvmf/common.sh@154 -- # true 00:21:15.945 15:01:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:16.203 Cannot find device "nvmf_tgt_br2" 00:21:16.203 15:01:49 -- nvmf/common.sh@155 -- # true 00:21:16.203 15:01:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:16.203 15:01:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:16.203 Cannot find device "nvmf_tgt_br" 00:21:16.203 15:01:49 -- nvmf/common.sh@157 -- # true 00:21:16.203 15:01:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:16.203 Cannot find device "nvmf_tgt_br2" 00:21:16.203 15:01:49 -- nvmf/common.sh@158 -- # true 00:21:16.203 15:01:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:16.203 15:01:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:16.203 15:01:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:16.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.203 15:01:49 -- nvmf/common.sh@161 -- # true 00:21:16.203 15:01:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:16.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.203 15:01:49 -- nvmf/common.sh@162 -- # true 00:21:16.203 15:01:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:16.203 15:01:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:16.203 15:01:49 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:16.203 15:01:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:16.203 15:01:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:16.203 15:01:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:16.203 15:01:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:16.203 15:01:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:16.203 15:01:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:16.203 15:01:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:16.203 15:01:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:16.203 15:01:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:16.203 15:01:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:16.203 15:01:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:16.203 15:01:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:16.203 15:01:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:16.203 15:01:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:16.203 15:01:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:16.203 15:01:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:16.203 15:01:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:16.203 15:01:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:16.462 15:01:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:16.462 15:01:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:16.462 15:01:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:16.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:21:16.462 00:21:16.462 --- 10.0.0.2 ping statistics --- 00:21:16.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.462 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:16.462 15:01:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:16.462 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:16.462 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:21:16.462 00:21:16.462 --- 10.0.0.3 ping statistics --- 00:21:16.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.462 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:16.462 15:01:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:16.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:16.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:21:16.462 00:21:16.462 --- 10.0.0.1 ping statistics --- 00:21:16.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.462 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:21:16.462 15:01:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.462 15:01:49 -- nvmf/common.sh@421 -- # return 0 00:21:16.462 15:01:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:16.462 15:01:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.462 15:01:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:16.462 15:01:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:16.462 15:01:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.462 15:01:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:16.462 15:01:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:16.462 15:01:49 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:16.462 15:01:49 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:16.462 15:01:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.462 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:21:16.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.462 15:01:49 -- host/fio.sh@24 -- # nvmfpid=94785 00:21:16.462 15:01:49 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:16.462 15:01:49 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.462 15:01:49 -- host/fio.sh@28 -- # waitforlisten 94785 00:21:16.462 15:01:49 -- common/autotest_common.sh@829 -- # '[' -z 94785 ']' 00:21:16.462 15:01:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.462 15:01:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.462 15:01:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.462 15:01:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.462 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:21:16.462 [2024-12-01 15:01:49.434042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:16.462 [2024-12-01 15:01:49.434315] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.722 [2024-12-01 15:01:49.578736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:16.722 [2024-12-01 15:01:49.648140] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:16.722 [2024-12-01 15:01:49.648634] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.722 [2024-12-01 15:01:49.648809] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.722 [2024-12-01 15:01:49.649034] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
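The nvmf_veth_init sequence traced above gives the run its private test network: nvmf_init_if (10.0.0.1) stays in the root namespace as the initiator side, while nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into nvmf_tgt_ns_spdk, with the peer ends bridged over nvmf_br and TCP port 4420 opened in iptables. Condensed into a minimal sketch using the same names and addresses as the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair (root ns)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2      # root ns -> target reachability check, as pinged above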
00:21:16.722 [2024-12-01 15:01:49.649439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.722 [2024-12-01 15:01:49.649516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.722 [2024-12-01 15:01:49.649598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.722 [2024-12-01 15:01:49.649602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.658 15:01:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.658 15:01:50 -- common/autotest_common.sh@862 -- # return 0 00:21:17.658 15:01:50 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:17.658 [2024-12-01 15:01:50.711642] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.658 15:01:50 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:17.658 15:01:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:17.658 15:01:50 -- common/autotest_common.sh@10 -- # set +x 00:21:17.918 15:01:50 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:17.918 Malloc1 00:21:17.918 15:01:51 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:18.177 15:01:51 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:18.436 15:01:51 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.695 [2024-12-01 15:01:51.658992] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.695 15:01:51 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:18.954 15:01:51 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:18.954 15:01:51 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:18.954 15:01:51 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:18.954 15:01:51 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:18.954 15:01:51 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:18.954 15:01:51 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:18.954 15:01:51 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:18.954 15:01:51 -- common/autotest_common.sh@1330 -- # shift 00:21:18.954 15:01:51 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:18.954 15:01:51 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:18.954 15:01:51 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:18.954 15:01:51 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:18.954 15:01:51 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:18.954 15:01:51 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:18.954 15:01:51 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:18.954 15:01:51 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:18.954 15:01:51 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:18.954 15:01:51 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:18.954 15:01:51 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:18.954 15:01:51 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:18.954 15:01:51 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:18.954 15:01:51 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:18.954 15:01:51 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:19.213 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:19.213 fio-3.35 00:21:19.213 Starting 1 thread 00:21:21.747 00:21:21.747 test: (groupid=0, jobs=1): err= 0: pid=94911: Sun Dec 1 15:01:54 2024 00:21:21.747 read: IOPS=10.4k, BW=40.7MiB/s (42.7MB/s)(81.6MiB/2005msec) 00:21:21.747 slat (nsec): min=1698, max=364095, avg=2318.66, stdev=3486.55 00:21:21.747 clat (usec): min=3308, max=12748, avg=6507.06, stdev=660.57 00:21:21.747 lat (usec): min=3370, max=12751, avg=6509.38, stdev=660.62 00:21:21.747 clat percentiles (usec): 00:21:21.747 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 5997], 00:21:21.747 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:21:21.747 | 70.00th=[ 6718], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7504], 00:21:21.747 | 99.00th=[ 8455], 99.50th=[ 9372], 99.90th=[11994], 99.95th=[12387], 00:21:21.747 | 99.99th=[12649] 00:21:21.747 bw ( KiB/s): min=39456, max=43216, per=99.90%, avg=41652.00, stdev=1616.73, samples=4 00:21:21.747 iops : min= 9864, max=10804, avg=10413.00, stdev=404.18, samples=4 00:21:21.747 write: IOPS=10.4k, BW=40.7MiB/s (42.7MB/s)(81.7MiB/2005msec); 0 zone resets 00:21:21.747 slat (nsec): min=1761, max=280139, avg=2384.19, stdev=2458.53 00:21:21.747 clat (usec): min=2519, max=10405, avg=5715.12, stdev=511.29 00:21:21.747 lat (usec): min=2533, max=10407, avg=5717.50, stdev=511.36 00:21:21.747 clat percentiles (usec): 00:21:21.747 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5342], 00:21:21.747 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5800], 00:21:21.747 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6325], 95.00th=[ 6587], 00:21:21.747 | 99.00th=[ 7177], 99.50th=[ 7504], 99.90th=[ 9241], 99.95th=[ 9896], 00:21:21.747 | 99.99th=[10290] 00:21:21.747 bw ( KiB/s): min=39936, max=42880, per=99.99%, avg=41696.00, stdev=1247.11, samples=4 00:21:21.747 iops : min= 9984, max=10720, avg=10424.00, stdev=311.78, samples=4 00:21:21.747 lat (msec) : 4=0.08%, 10=99.72%, 20=0.20% 00:21:21.747 cpu : usr=68.76%, sys=22.31%, ctx=9, majf=0, minf=5 00:21:21.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:21.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:21.747 issued rwts: total=20898,20903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.747 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:21.747 00:21:21.747 Run status group 0 (all jobs): 00:21:21.747 READ: bw=40.7MiB/s (42.7MB/s), 40.7MiB/s-40.7MiB/s (42.7MB/s-42.7MB/s), io=81.6MiB (85.6MB), 
run=2005-2005msec 00:21:21.747 WRITE: bw=40.7MiB/s (42.7MB/s), 40.7MiB/s-40.7MiB/s (42.7MB/s-42.7MB/s), io=81.7MiB (85.6MB), run=2005-2005msec 00:21:21.747 15:01:54 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:21.747 15:01:54 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:21.747 15:01:54 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:21.747 15:01:54 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:21.747 15:01:54 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:21.747 15:01:54 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:21.747 15:01:54 -- common/autotest_common.sh@1330 -- # shift 00:21:21.747 15:01:54 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:21.747 15:01:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:21.747 15:01:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:21.747 15:01:54 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:21.747 15:01:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:21.747 15:01:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:21.747 15:01:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:21.747 15:01:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:21.747 15:01:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:21.747 15:01:54 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:21.747 15:01:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:21.747 15:01:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:21.747 15:01:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:21.747 15:01:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:21.747 15:01:54 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:21.747 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:21.747 fio-3.35 00:21:21.747 Starting 1 thread 00:21:24.332 00:21:24.332 test: (groupid=0, jobs=1): err= 0: pid=94962: Sun Dec 1 15:01:56 2024 00:21:24.332 read: IOPS=8595, BW=134MiB/s (141MB/s)(270MiB/2007msec) 00:21:24.332 slat (usec): min=2, max=586, avg= 3.57, stdev= 5.05 00:21:24.332 clat (usec): min=2004, max=16843, avg=8909.81, stdev=2071.43 00:21:24.332 lat (usec): min=2007, max=16848, avg=8913.38, stdev=2071.57 00:21:24.332 clat percentiles (usec): 00:21:24.332 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7046], 00:21:24.332 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:21:24.332 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11469], 95.00th=[12649], 00:21:24.332 | 99.00th=[14484], 99.50th=[15008], 99.90th=[15926], 99.95th=[16319], 00:21:24.332 | 99.99th=[16712] 00:21:24.332 bw ( KiB/s): min=65632, max=80032, per=51.39%, avg=70672.00, stdev=6657.31, samples=4 00:21:24.332 iops : 
min= 4102, max= 5002, avg=4417.00, stdev=416.08, samples=4 00:21:24.332 write: IOPS=5128, BW=80.1MiB/s (84.0MB/s)(144MiB/1796msec); 0 zone resets 00:21:24.332 slat (usec): min=29, max=353, avg=34.77, stdev= 9.77 00:21:24.332 clat (usec): min=3661, max=18036, avg=10331.04, stdev=1727.01 00:21:24.332 lat (usec): min=3691, max=18084, avg=10365.81, stdev=1729.51 00:21:24.332 clat percentiles (usec): 00:21:24.332 | 1.00th=[ 6915], 5.00th=[ 7898], 10.00th=[ 8356], 20.00th=[ 8848], 00:21:24.332 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552], 00:21:24.332 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12649], 95.00th=[13566], 00:21:24.332 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16057], 99.95th=[16057], 00:21:24.332 | 99.99th=[17957] 00:21:24.332 bw ( KiB/s): min=67392, max=82176, per=89.45%, avg=73392.00, stdev=6546.09, samples=4 00:21:24.332 iops : min= 4212, max= 5136, avg=4587.00, stdev=409.13, samples=4 00:21:24.332 lat (msec) : 4=0.23%, 10=60.93%, 20=38.84% 00:21:24.332 cpu : usr=63.91%, sys=22.53%, ctx=16, majf=0, minf=1 00:21:24.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:24.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:24.332 issued rwts: total=17251,9210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:24.332 00:21:24.332 Run status group 0 (all jobs): 00:21:24.332 READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=270MiB (283MB), run=2007-2007msec 00:21:24.332 WRITE: bw=80.1MiB/s (84.0MB/s), 80.1MiB/s-80.1MiB/s (84.0MB/s-84.0MB/s), io=144MiB (151MB), run=1796-1796msec 00:21:24.332 15:01:56 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.332 15:01:57 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:24.332 15:01:57 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:24.332 15:01:57 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:24.332 15:01:57 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:24.332 15:01:57 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:24.332 15:01:57 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:24.332 15:01:57 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:24.332 15:01:57 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:24.332 15:01:57 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:24.332 15:01:57 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:24.332 15:01:57 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:24.591 Nvme0n1 00:21:24.591 15:01:57 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:24.851 15:01:57 -- host/fio.sh@53 -- # ls_guid=cb923719-3c95-4c6f-a884-f4441b4d8f19 00:21:24.851 15:01:57 -- host/fio.sh@54 -- # get_lvs_free_mb cb923719-3c95-4c6f-a884-f4441b4d8f19 00:21:24.851 15:01:57 -- common/autotest_common.sh@1353 -- # local lvs_uuid=cb923719-3c95-4c6f-a884-f4441b4d8f19 00:21:24.851 15:01:57 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:24.851 15:01:57 -- common/autotest_common.sh@1355 -- # local fc 00:21:24.851 15:01:57 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:24.851 15:01:57 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:25.110 15:01:58 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:25.110 { 00:21:25.110 "base_bdev": "Nvme0n1", 00:21:25.110 "block_size": 4096, 00:21:25.110 "cluster_size": 1073741824, 00:21:25.110 "free_clusters": 4, 00:21:25.110 "name": "lvs_0", 00:21:25.110 "total_data_clusters": 4, 00:21:25.110 "uuid": "cb923719-3c95-4c6f-a884-f4441b4d8f19" 00:21:25.110 } 00:21:25.110 ]' 00:21:25.110 15:01:58 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="cb923719-3c95-4c6f-a884-f4441b4d8f19") .free_clusters' 00:21:25.110 15:01:58 -- common/autotest_common.sh@1358 -- # fc=4 00:21:25.110 15:01:58 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="cb923719-3c95-4c6f-a884-f4441b4d8f19") .cluster_size' 00:21:25.110 4096 00:21:25.110 15:01:58 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:25.110 15:01:58 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:25.110 15:01:58 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:25.111 15:01:58 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:25.370 2e4bb225-2027-44fe-b359-eeb32c746cac 00:21:25.370 15:01:58 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:25.629 15:01:58 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:25.888 15:01:58 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:26.147 15:01:59 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:26.147 15:01:59 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:26.147 15:01:59 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:26.147 15:01:59 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:26.147 15:01:59 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:26.147 15:01:59 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.147 15:01:59 -- common/autotest_common.sh@1330 -- # shift 00:21:26.147 15:01:59 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:26.147 15:01:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.147 15:01:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.147 15:01:59 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:26.147 15:01:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:26.147 15:01:59 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:26.147 15:01:59 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:26.147 15:01:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.147 15:01:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.147 15:01:59 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:26.147 15:01:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:26.147 15:01:59 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:26.147 15:01:59 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:26.147 15:01:59 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:26.147 15:01:59 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:26.406 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:26.406 fio-3.35 00:21:26.406 Starting 1 thread 00:21:28.940 00:21:28.940 test: (groupid=0, jobs=1): err= 0: pid=95116: Sun Dec 1 15:02:01 2024 00:21:28.940 read: IOPS=6250, BW=24.4MiB/s (25.6MB/s)(49.0MiB/2008msec) 00:21:28.940 slat (nsec): min=1839, max=356831, avg=2941.85, stdev=4918.26 00:21:28.940 clat (usec): min=4500, max=20119, avg=10866.87, stdev=1053.16 00:21:28.940 lat (usec): min=4510, max=20122, avg=10869.81, stdev=1052.98 00:21:28.940 clat percentiles (usec): 00:21:28.940 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:21:28.940 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:21:28.940 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12125], 95.00th=[12518], 00:21:28.940 | 99.00th=[13435], 99.50th=[13829], 99.90th=[18220], 99.95th=[19268], 00:21:28.940 | 99.99th=[20055] 00:21:28.940 bw ( KiB/s): min=23960, max=25368, per=99.81%, avg=24956.00, stdev=671.79, samples=4 00:21:28.940 iops : min= 5990, max= 6342, avg=6239.00, stdev=167.95, samples=4 00:21:28.940 write: IOPS=6242, BW=24.4MiB/s (25.6MB/s)(49.0MiB/2008msec); 0 zone resets 00:21:28.940 slat (nsec): min=1888, max=291901, avg=2959.06, stdev=3820.45 00:21:28.940 clat (usec): min=2615, max=17898, avg=9545.09, stdev=874.64 00:21:28.940 lat (usec): min=2629, max=17900, avg=9548.05, stdev=874.53 00:21:28.940 clat percentiles (usec): 00:21:28.940 | 1.00th=[ 7635], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:21:28.940 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:21:28.940 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 00:21:28.940 | 99.00th=[11469], 99.50th=[11731], 99.90th=[15664], 99.95th=[16057], 00:21:28.940 | 99.99th=[17171] 00:21:28.940 bw ( KiB/s): min=24832, max=25152, per=99.98%, avg=24962.00, stdev=138.31, samples=4 00:21:28.940 iops : min= 6208, max= 6288, avg=6240.50, stdev=34.58, samples=4 00:21:28.940 lat (msec) : 4=0.04%, 10=44.69%, 20=55.27%, 50=0.01% 00:21:28.940 cpu : usr=69.46%, sys=23.17%, ctx=4, majf=0, minf=5 00:21:28.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:28.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:28.940 issued rwts: total=12552,12534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:28.940 00:21:28.940 Run status group 0 (all jobs): 00:21:28.940 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=49.0MiB (51.4MB), run=2008-2008msec 00:21:28.940 WRITE: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=49.0MiB (51.3MB), run=2008-2008msec 00:21:28.940 15:02:01 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:28.940 15:02:01 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:29.200 15:02:02 -- host/fio.sh@64 -- # ls_nested_guid=4aa4ff9e-2b6e-4e1f-9aee-6369167af112 00:21:29.200 15:02:02 -- host/fio.sh@65 -- # get_lvs_free_mb 4aa4ff9e-2b6e-4e1f-9aee-6369167af112 00:21:29.200 15:02:02 -- common/autotest_common.sh@1353 -- # local lvs_uuid=4aa4ff9e-2b6e-4e1f-9aee-6369167af112 00:21:29.200 15:02:02 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:29.200 15:02:02 -- common/autotest_common.sh@1355 -- # local fc 00:21:29.200 15:02:02 -- common/autotest_common.sh@1356 -- # local cs 00:21:29.200 15:02:02 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:29.458 15:02:02 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:29.458 { 00:21:29.458 "base_bdev": "Nvme0n1", 00:21:29.458 "block_size": 4096, 00:21:29.458 "cluster_size": 1073741824, 00:21:29.458 "free_clusters": 0, 00:21:29.458 "name": "lvs_0", 00:21:29.458 "total_data_clusters": 4, 00:21:29.458 "uuid": "cb923719-3c95-4c6f-a884-f4441b4d8f19" 00:21:29.458 }, 00:21:29.458 { 00:21:29.458 "base_bdev": "2e4bb225-2027-44fe-b359-eeb32c746cac", 00:21:29.458 "block_size": 4096, 00:21:29.458 "cluster_size": 4194304, 00:21:29.458 "free_clusters": 1022, 00:21:29.458 "name": "lvs_n_0", 00:21:29.458 "total_data_clusters": 1022, 00:21:29.458 "uuid": "4aa4ff9e-2b6e-4e1f-9aee-6369167af112" 00:21:29.458 } 00:21:29.458 ]' 00:21:29.458 15:02:02 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="4aa4ff9e-2b6e-4e1f-9aee-6369167af112") .free_clusters' 00:21:29.458 15:02:02 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:29.458 15:02:02 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="4aa4ff9e-2b6e-4e1f-9aee-6369167af112") .cluster_size' 00:21:29.458 4088 00:21:29.458 15:02:02 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:29.458 15:02:02 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:29.458 15:02:02 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:29.458 15:02:02 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:29.717 dfe77998-0d7a-4dcb-ba4c-050ffc8bf6da 00:21:29.717 15:02:02 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:29.976 15:02:02 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:30.234 15:02:03 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:30.494 15:02:03 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:30.494 15:02:03 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:30.494 15:02:03 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:30.494 15:02:03 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:30.494 
15:02:03 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:30.494 15:02:03 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:30.494 15:02:03 -- common/autotest_common.sh@1330 -- # shift 00:21:30.494 15:02:03 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:30.494 15:02:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:30.494 15:02:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:30.494 15:02:03 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:30.494 15:02:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:30.494 15:02:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:30.494 15:02:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:30.494 15:02:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:30.494 15:02:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:30.494 15:02:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:30.494 15:02:03 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:30.494 15:02:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:30.494 15:02:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:30.494 15:02:03 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:30.494 15:02:03 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:30.494 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:30.494 fio-3.35 00:21:30.494 Starting 1 thread 00:21:33.028 00:21:33.028 test: (groupid=0, jobs=1): err= 0: pid=95235: Sun Dec 1 15:02:05 2024 00:21:33.028 read: IOPS=6117, BW=23.9MiB/s (25.1MB/s)(48.0MiB/2009msec) 00:21:33.028 slat (nsec): min=1685, max=358409, avg=2947.55, stdev=5194.03 00:21:33.028 clat (usec): min=4535, max=19936, avg=11246.44, stdev=1135.31 00:21:33.028 lat (usec): min=4545, max=19938, avg=11249.38, stdev=1135.08 00:21:33.028 clat percentiles (usec): 00:21:33.028 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:21:33.028 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:21:33.028 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12649], 95.00th=[13042], 00:21:33.028 | 99.00th=[13960], 99.50th=[14353], 99.90th=[18482], 99.95th=[19006], 00:21:33.028 | 99.99th=[19792] 00:21:33.028 bw ( KiB/s): min=23248, max=24936, per=99.89%, avg=24444.00, stdev=800.60, samples=4 00:21:33.028 iops : min= 5812, max= 6234, avg=6111.00, stdev=200.15, samples=4 00:21:33.028 write: IOPS=6099, BW=23.8MiB/s (25.0MB/s)(47.9MiB/2009msec); 0 zone resets 00:21:33.028 slat (nsec): min=1839, max=303622, avg=3174.61, stdev=4526.12 00:21:33.028 clat (usec): min=2593, max=18317, avg=9629.68, stdev=943.91 00:21:33.028 lat (usec): min=2605, max=18319, avg=9632.86, stdev=943.72 00:21:33.028 clat percentiles (usec): 00:21:33.028 | 1.00th=[ 7504], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8979], 00:21:33.028 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:21:33.028 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:21:33.028 | 99.00th=[11731], 99.50th=[11994], 99.90th=[15533], 99.95th=[16581], 00:21:33.028 | 99.99th=[16909] 
00:21:33.028 bw ( KiB/s): min=24280, max=24512, per=99.97%, avg=24390.00, stdev=95.86, samples=4 00:21:33.028 iops : min= 6070, max= 6128, avg=6097.50, stdev=23.97, samples=4 00:21:33.028 lat (msec) : 4=0.04%, 10=39.52%, 20=60.43% 00:21:33.028 cpu : usr=70.22%, sys=22.26%, ctx=9, majf=0, minf=5 00:21:33.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:33.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.028 issued rwts: total=12291,12253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.028 00:21:33.028 Run status group 0 (all jobs): 00:21:33.028 READ: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.0MiB (50.3MB), run=2009-2009msec 00:21:33.028 WRITE: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=47.9MiB (50.2MB), run=2009-2009msec 00:21:33.028 15:02:05 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:33.288 15:02:06 -- host/fio.sh@74 -- # sync 00:21:33.288 15:02:06 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:33.547 15:02:06 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:33.805 15:02:06 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:33.805 15:02:06 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:34.063 15:02:07 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:35.439 15:02:08 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:35.439 15:02:08 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:35.439 15:02:08 -- host/fio.sh@86 -- # nvmftestfini 00:21:35.439 15:02:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:35.439 15:02:08 -- nvmf/common.sh@116 -- # sync 00:21:35.439 15:02:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:35.439 15:02:08 -- nvmf/common.sh@119 -- # set +e 00:21:35.439 15:02:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:35.439 15:02:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:35.439 rmmod nvme_tcp 00:21:35.439 rmmod nvme_fabrics 00:21:35.439 rmmod nvme_keyring 00:21:35.439 15:02:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:35.439 15:02:08 -- nvmf/common.sh@123 -- # set -e 00:21:35.439 15:02:08 -- nvmf/common.sh@124 -- # return 0 00:21:35.439 15:02:08 -- nvmf/common.sh@477 -- # '[' -n 94785 ']' 00:21:35.439 15:02:08 -- nvmf/common.sh@478 -- # killprocess 94785 00:21:35.439 15:02:08 -- common/autotest_common.sh@936 -- # '[' -z 94785 ']' 00:21:35.439 15:02:08 -- common/autotest_common.sh@940 -- # kill -0 94785 00:21:35.439 15:02:08 -- common/autotest_common.sh@941 -- # uname 00:21:35.439 15:02:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.439 15:02:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94785 00:21:35.439 15:02:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:35.439 15:02:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:35.439 killing process with pid 94785 00:21:35.439 15:02:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94785' 00:21:35.439 15:02:08 -- 
common/autotest_common.sh@955 -- # kill 94785 00:21:35.439 15:02:08 -- common/autotest_common.sh@960 -- # wait 94785 00:21:35.439 15:02:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:35.439 15:02:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:35.439 15:02:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:35.439 15:02:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.439 15:02:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:35.439 15:02:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.439 15:02:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.439 15:02:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.439 15:02:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:35.439 00:21:35.440 real 0m19.735s 00:21:35.440 user 1m26.143s 00:21:35.440 sys 0m4.550s 00:21:35.440 15:02:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:35.440 15:02:08 -- common/autotest_common.sh@10 -- # set +x 00:21:35.440 ************************************ 00:21:35.440 END TEST nvmf_fio_host 00:21:35.440 ************************************ 00:21:35.700 15:02:08 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:35.700 15:02:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:35.700 15:02:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:35.700 15:02:08 -- common/autotest_common.sh@10 -- # set +x 00:21:35.700 ************************************ 00:21:35.700 START TEST nvmf_failover 00:21:35.700 ************************************ 00:21:35.700 15:02:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:35.700 * Looking for test storage... 00:21:35.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:35.700 15:02:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:35.700 15:02:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:35.700 15:02:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:35.700 15:02:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:35.700 15:02:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:35.700 15:02:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:35.700 15:02:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:35.700 15:02:08 -- scripts/common.sh@335 -- # IFS=.-: 00:21:35.700 15:02:08 -- scripts/common.sh@335 -- # read -ra ver1 00:21:35.700 15:02:08 -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.700 15:02:08 -- scripts/common.sh@336 -- # read -ra ver2 00:21:35.700 15:02:08 -- scripts/common.sh@337 -- # local 'op=<' 00:21:35.700 15:02:08 -- scripts/common.sh@339 -- # ver1_l=2 00:21:35.700 15:02:08 -- scripts/common.sh@340 -- # ver2_l=1 00:21:35.700 15:02:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:35.700 15:02:08 -- scripts/common.sh@343 -- # case "$op" in 00:21:35.700 15:02:08 -- scripts/common.sh@344 -- # : 1 00:21:35.700 15:02:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:35.700 15:02:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.700 15:02:08 -- scripts/common.sh@364 -- # decimal 1 00:21:35.700 15:02:08 -- scripts/common.sh@352 -- # local d=1 00:21:35.700 15:02:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.700 15:02:08 -- scripts/common.sh@354 -- # echo 1 00:21:35.700 15:02:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:35.700 15:02:08 -- scripts/common.sh@365 -- # decimal 2 00:21:35.700 15:02:08 -- scripts/common.sh@352 -- # local d=2 00:21:35.700 15:02:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.700 15:02:08 -- scripts/common.sh@354 -- # echo 2 00:21:35.700 15:02:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:35.700 15:02:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:35.700 15:02:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:35.700 15:02:08 -- scripts/common.sh@367 -- # return 0 00:21:35.700 15:02:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.700 15:02:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:35.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.700 --rc genhtml_branch_coverage=1 00:21:35.700 --rc genhtml_function_coverage=1 00:21:35.700 --rc genhtml_legend=1 00:21:35.700 --rc geninfo_all_blocks=1 00:21:35.700 --rc geninfo_unexecuted_blocks=1 00:21:35.700 00:21:35.700 ' 00:21:35.700 15:02:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:35.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.700 --rc genhtml_branch_coverage=1 00:21:35.700 --rc genhtml_function_coverage=1 00:21:35.700 --rc genhtml_legend=1 00:21:35.700 --rc geninfo_all_blocks=1 00:21:35.700 --rc geninfo_unexecuted_blocks=1 00:21:35.700 00:21:35.700 ' 00:21:35.700 15:02:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:35.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.700 --rc genhtml_branch_coverage=1 00:21:35.700 --rc genhtml_function_coverage=1 00:21:35.700 --rc genhtml_legend=1 00:21:35.700 --rc geninfo_all_blocks=1 00:21:35.700 --rc geninfo_unexecuted_blocks=1 00:21:35.700 00:21:35.700 ' 00:21:35.700 15:02:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:35.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.700 --rc genhtml_branch_coverage=1 00:21:35.700 --rc genhtml_function_coverage=1 00:21:35.700 --rc genhtml_legend=1 00:21:35.700 --rc geninfo_all_blocks=1 00:21:35.700 --rc geninfo_unexecuted_blocks=1 00:21:35.700 00:21:35.700 ' 00:21:35.700 15:02:08 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:35.700 15:02:08 -- nvmf/common.sh@7 -- # uname -s 00:21:35.700 15:02:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.700 15:02:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.700 15:02:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.700 15:02:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.700 15:02:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.700 15:02:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.700 15:02:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.700 15:02:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.700 15:02:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.700 15:02:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.700 15:02:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:21:35.700 
15:02:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:21:35.700 15:02:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.700 15:02:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.700 15:02:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:35.700 15:02:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:35.700 15:02:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.700 15:02:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.700 15:02:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.700 15:02:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.700 15:02:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.701 15:02:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.701 15:02:08 -- paths/export.sh@5 -- # export PATH 00:21:35.701 15:02:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.701 15:02:08 -- nvmf/common.sh@46 -- # : 0 00:21:35.701 15:02:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:35.701 15:02:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:35.701 15:02:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:35.701 15:02:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.701 15:02:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.701 15:02:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
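nvmf/common.sh also fixes the host identity once per run: nvme gen-hostnqn yields the nqn.2014-08.org.nvmexpress:uuid:... host NQN, the UUID part is reused as NVME_HOSTID, and both are packed into the NVME_HOST array that gets appended to kernel-initiator commands. No nvme connect is issued in this part of the trace, but with these variables a connection to the listener created below would look roughly like this (hand-written illustration, not a command from this run):

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"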
00:21:35.701 15:02:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:35.701 15:02:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:35.701 15:02:08 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:35.701 15:02:08 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:35.701 15:02:08 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.701 15:02:08 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.701 15:02:08 -- host/failover.sh@18 -- # nvmftestinit 00:21:35.701 15:02:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:35.701 15:02:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.701 15:02:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:35.701 15:02:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:35.701 15:02:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:35.701 15:02:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.701 15:02:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.701 15:02:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.701 15:02:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:35.701 15:02:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:35.701 15:02:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:35.701 15:02:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:35.701 15:02:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:35.701 15:02:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:35.701 15:02:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.701 15:02:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.701 15:02:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:35.701 15:02:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:35.701 15:02:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:35.701 15:02:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:35.701 15:02:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:35.701 15:02:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.701 15:02:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:35.701 15:02:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:35.701 15:02:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:35.701 15:02:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:35.701 15:02:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:35.701 15:02:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:35.701 Cannot find device "nvmf_tgt_br" 00:21:35.701 15:02:08 -- nvmf/common.sh@154 -- # true 00:21:35.701 15:02:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:35.960 Cannot find device "nvmf_tgt_br2" 00:21:35.960 15:02:08 -- nvmf/common.sh@155 -- # true 00:21:35.960 15:02:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:35.960 15:02:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:35.960 Cannot find device "nvmf_tgt_br" 00:21:35.960 15:02:08 -- nvmf/common.sh@157 -- # true 00:21:35.960 15:02:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:35.960 Cannot find device "nvmf_tgt_br2" 00:21:35.960 15:02:08 -- nvmf/common.sh@158 -- # true 00:21:35.960 15:02:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:35.960 15:02:08 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:35.960 15:02:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:35.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.960 15:02:08 -- nvmf/common.sh@161 -- # true 00:21:35.960 15:02:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:35.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.960 15:02:08 -- nvmf/common.sh@162 -- # true 00:21:35.960 15:02:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:35.960 15:02:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:35.960 15:02:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:35.960 15:02:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:35.960 15:02:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:35.960 15:02:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:35.960 15:02:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:35.960 15:02:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:35.960 15:02:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:35.960 15:02:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:35.960 15:02:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:35.960 15:02:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:35.960 15:02:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:35.960 15:02:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:35.960 15:02:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:35.960 15:02:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:35.960 15:02:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:35.960 15:02:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:35.960 15:02:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:35.960 15:02:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:35.960 15:02:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:35.960 15:02:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:35.960 15:02:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:35.960 15:02:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:36.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:21:36.217 00:21:36.217 --- 10.0.0.2 ping statistics --- 00:21:36.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.217 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:21:36.217 15:02:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:36.217 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:36.217 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:36.217 00:21:36.217 --- 10.0.0.3 ping statistics --- 00:21:36.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.217 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:36.217 15:02:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:36.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:36.217 00:21:36.217 --- 10.0.0.1 ping statistics --- 00:21:36.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.217 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:36.217 15:02:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.217 15:02:09 -- nvmf/common.sh@421 -- # return 0 00:21:36.217 15:02:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:36.217 15:02:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.217 15:02:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:36.217 15:02:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:36.217 15:02:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.217 15:02:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:36.217 15:02:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:36.217 15:02:09 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:36.218 15:02:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:36.218 15:02:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:36.218 15:02:09 -- common/autotest_common.sh@10 -- # set +x 00:21:36.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.218 15:02:09 -- nvmf/common.sh@469 -- # nvmfpid=95516 00:21:36.218 15:02:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:36.218 15:02:09 -- nvmf/common.sh@470 -- # waitforlisten 95516 00:21:36.218 15:02:09 -- common/autotest_common.sh@829 -- # '[' -z 95516 ']' 00:21:36.218 15:02:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.218 15:02:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.218 15:02:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.218 15:02:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.218 15:02:09 -- common/autotest_common.sh@10 -- # set +x 00:21:36.218 [2024-12-01 15:02:09.174911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:36.218 [2024-12-01 15:02:09.175001] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.218 [2024-12-01 15:02:09.317932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:36.476 [2024-12-01 15:02:09.402375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:36.476 [2024-12-01 15:02:09.402685] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.476 [2024-12-01 15:02:09.402853] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
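For reference, the veth topology that nvmf_veth_init builds in the trace above can be reproduced on its own. The sketch below is condensed from the ip/iptables calls the script runs (interface, bridge and namespace names are taken verbatim from the log); the teardown half and the error handling are omitted.

# Hedged condensation of nvmf_veth_init as traced above: the initiator keeps
# 10.0.0.1 in the root namespace, the target gets 10.0.0.2 and 10.0.0.3 inside
# the nvmf_tgt_ns_spdk namespace, and all peers are joined through nvmf_br.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity checks, matching the three pings in the log
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1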
00:21:36.476 [2024-12-01 15:02:09.403020] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.476 [2024-12-01 15:02:09.403328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.476 [2024-12-01 15:02:09.403430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.476 [2024-12-01 15:02:09.403421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.408 15:02:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.408 15:02:10 -- common/autotest_common.sh@862 -- # return 0 00:21:37.408 15:02:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:37.408 15:02:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:37.408 15:02:10 -- common/autotest_common.sh@10 -- # set +x 00:21:37.408 15:02:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.408 15:02:10 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:37.667 [2024-12-01 15:02:10.530020] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.667 15:02:10 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:37.925 Malloc0 00:21:37.925 15:02:10 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.184 15:02:11 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:38.442 15:02:11 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.442 [2024-12-01 15:02:11.487306] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.442 15:02:11 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:38.701 [2024-12-01 15:02:11.699547] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:38.701 15:02:11 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:38.960 [2024-12-01 15:02:11.903894] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:38.960 15:02:11 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:38.960 15:02:11 -- host/failover.sh@31 -- # bdevperf_pid=95629 00:21:38.960 15:02:11 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:38.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
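The target-side bring-up above reduces to a short RPC sequence plus the bdevperf launch. The recap below is a hedged condensation using the paths and arguments printed in the trace (host/failover.sh steps 22 through 31); the loop over ports is shorthand for the three separate add_listener calls, and on another checkout the repo paths would differ.

# Hedged recap of the target configuration traced above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options copied from the trace
$rpc_py bdev_malloc_create 64 512 -b Malloc0               # 64 MB malloc bdev, 512-byte blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Three listeners on the same target address; the failover test will cycle through them.
for port in 4420 4421 4422; do
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
# bdevperf starts idle (-z) with its own RPC socket: queue depth 128, 4096-byte verify I/O for 15 s.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!                                             # 95629 in this run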
00:21:38.960 15:02:11 -- host/failover.sh@34 -- # waitforlisten 95629 /var/tmp/bdevperf.sock 00:21:38.960 15:02:11 -- common/autotest_common.sh@829 -- # '[' -z 95629 ']' 00:21:38.960 15:02:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.960 15:02:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.960 15:02:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.960 15:02:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.960 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:21:39.897 15:02:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.897 15:02:12 -- common/autotest_common.sh@862 -- # return 0 00:21:39.897 15:02:12 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.156 NVMe0n1 00:21:40.156 15:02:13 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.416 00:21:40.416 15:02:13 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.416 15:02:13 -- host/failover.sh@39 -- # run_test_pid=95682 00:21:40.416 15:02:13 -- host/failover.sh@41 -- # sleep 1 00:21:41.794 15:02:14 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.794 [2024-12-01 15:02:14.766245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766318] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766331] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766364] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766371] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766379] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 
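The initiator-side setup traced just above is similarly compact: once bdevperf answers on its RPC socket, the script registers two TCP paths to the same subsystem under one controller name and kicks off the verify job. A hedged condensation, with every argument taken from the trace; only the backgrounding and pid capture are inferred.

# Hedged condensation of the initiator-side setup above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
$rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # first path, exposes NVMe0n1
$rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # second path to the same subsystem
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s "$bdevperf_rpc_sock" perform_tests &                             # drives the 15 s verify run
run_test_pid=$!                                                         # 95682 in this run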
[2024-12-01 15:02:14.766385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766399] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766426] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766432] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766446] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766452] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766546] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766552] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766629] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766650] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766664] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.794 [2024-12-01 15:02:14.766691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.795 [2024-12-01 15:02:14.766698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.795 [2024-12-01 15:02:14.766705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.795 [2024-12-01 15:02:14.766712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.795 [2024-12-01 15:02:14.766719] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317c90 is same with the state(5) to be set 00:21:41.795 15:02:14 -- host/failover.sh@45 -- # sleep 3 00:21:45.081 15:02:17 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.081 00:21:45.081 15:02:18 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:45.339 [2024-12-01 15:02:18.392331] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392466] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392502] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392511] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392547] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392598] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 
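The failover itself is driven entirely from the target side: listeners are removed and re-added underneath the running verify job, and the bdev_nvme layer inside bdevperf is expected to move I/O to whichever path is still reachable. The bursts of tcp.c:1576 "recv state" messages appear right after each listener change and look like target-side queue-pair teardown noise rather than a failure; the run still completes and reports 0 further below. A hedged recap of the moves traced so far (the remaining listener changes follow later in the trace); the comments are inferences from the sequence, not script output.

# Hedged recap of the listener moves driving the failover, as traced above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # drop the path NVMe0n1 was created on
sleep 3                                                                    # verify I/O presumably continues on 4421
$rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"                           # register a third path
$rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421   # now drop the second path as well
sleep 3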
00:21:45.339 [2024-12-01 15:02:18.392608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392617] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392626] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.339 [2024-12-01 15:02:18.392643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.340 [2024-12-01 15:02:18.392652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.340 [2024-12-01 15:02:18.392661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.340 [2024-12-01 15:02:18.392669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.340 [2024-12-01 15:02:18.392678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319380 is same with the state(5) to be set 00:21:45.340 15:02:18 -- host/failover.sh@50 -- # sleep 3 00:21:48.625 15:02:21 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.625 [2024-12-01 15:02:21.674366] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.625 15:02:21 -- host/failover.sh@55 -- # sleep 1 00:21:50.002 15:02:22 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:50.002 [2024-12-01 15:02:22.944615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945438] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be 
set 00:21:50.002 [2024-12-01 15:02:22.945478] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945583] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945591] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945677] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.002 [2024-12-01 15:02:22.945697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.003 [2024-12-01 15:02:22.945709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.003 [2024-12-01 15:02:22.945718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.003 [2024-12-01 15:02:22.945728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.003 [2024-12-01 15:02:22.945739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.003 [2024-12-01 15:02:22.945748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.003 [2024-12-01 15:02:22.945767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.003 [2024-12-01 15:02:22.945777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.003 [2024-12-01 15:02:22.945786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.003 [2024-12-01 15:02:22.945797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2319a60 is same with the state(5) to be set 00:21:50.003 15:02:22 -- host/failover.sh@59 -- # wait 95682 00:21:56.616 0 00:21:56.616 15:02:28 -- host/failover.sh@61 -- # killprocess 95629 00:21:56.616 15:02:28 -- common/autotest_common.sh@936 -- # '[' -z 95629 ']' 00:21:56.616 15:02:28 -- common/autotest_common.sh@940 -- # kill -0 95629 00:21:56.616 15:02:28 -- common/autotest_common.sh@941 -- # uname 00:21:56.616 15:02:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:56.616 15:02:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95629 00:21:56.616 killing process with pid 95629 00:21:56.616 15:02:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:56.616 15:02:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:56.616 15:02:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95629' 00:21:56.616 15:02:28 -- common/autotest_common.sh@955 -- # kill 95629 00:21:56.616 15:02:28 -- common/autotest_common.sh@960 -- # wait 95629 00:21:56.616 15:02:28 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:56.616 [2024-12-01 15:02:11.965182] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:56.616 [2024-12-01 15:02:11.965284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95629 ] 00:21:56.616 [2024-12-01 15:02:12.102129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.616 [2024-12-01 15:02:12.169823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.616 Running I/O for 15 seconds... 
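After the 15 second run the script collects results and tears the initiator down: it waits for the perform_tests helper (which returned 0 above), stops bdevperf, and replays its log file, which is the bdevperf output that follows. A hedged sketch of that teardown; the killprocess body is pieced together from the autotest_common.sh trace lines (kill -0, ps, kill) and may differ in detail from the real helper.

# Hedged sketch of the teardown traced above.
wait "$run_test_pid"                                      # perform_tests exit status; "0" in this run
# killprocess "$bdevperf_pid", roughly as traced:
kill -0 "$bdevperf_pid"                                   # confirm the process is still alive
process_name=$(ps --no-headers -o comm= "$bdevperf_pid")  # reactor_0 here
if [ "$process_name" != sudo ]; then                      # never signal a sudo wrapper directly
    echo "killing process with pid $bdevperf_pid"
    kill "$bdevperf_pid"
fi
cat "$testdir/try.txt"                                    # replay the bdevperf log (the output below)
rm -f "$testdir/try.txt"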
00:21:56.616 [2024-12-01 15:02:14.767004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767328] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.616 [2024-12-01 15:02:14.767340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.616 [2024-12-01 15:02:14.767381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.767553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.767579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.767603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767616] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.767627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.767652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.767677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.767709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.767814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.767875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.767899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9464 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.767979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.767992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:56.617 [2024-12-01 15:02:14.768206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.768383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.617 [2024-12-01 15:02:14.768408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768464] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.617 [2024-12-01 15:02:14.768477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.617 [2024-12-01 15:02:14.768489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.768612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.768637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.768661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.768711] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.768813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.768888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.768937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.768968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.768980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.768992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 
[2024-12-01 15:02:14.769275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.769535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.618 [2024-12-01 15:02:14.769561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.618 [2024-12-01 15:02:14.769638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.618 [2024-12-01 15:02:14.769652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.769664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.769694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.619 [2024-12-01 15:02:14.769720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.769745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.769797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.769834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.769861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.769888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.619 [2024-12-01 15:02:14.769912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.769937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.619 [2024-12-01 15:02:14.769962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.769975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.769987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.619 [2024-12-01 15:02:14.770060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.619 [2024-12-01 15:02:14.770085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.619 [2024-12-01 15:02:14.770108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10560 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.619 [2024-12-01 15:02:14.770138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.619 [2024-12-01 15:02:14.770268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:56.619 [2024-12-01 15:02:14.770390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:14.770520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede130 is same with the state(5) to be set 00:21:56.619 [2024-12-01 15:02:14.770552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.619 [2024-12-01 15:02:14.770561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.619 [2024-12-01 15:02:14.770570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10032 len:8 PRP1 0x0 PRP2 0x0 00:21:56.619 [2024-12-01 15:02:14.770582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770636] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ede130 was disconnected and freed. reset controller. 
00:21:56.619 [2024-12-01 15:02:14.770652] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:56.619 [2024-12-01 15:02:14.770700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.619 [2024-12-01 15:02:14.770719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.619 [2024-12-01 15:02:14.770744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.619 [2024-12-01 15:02:14.770783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.619 [2024-12-01 15:02:14.770807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:14.770818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:56.619 [2024-12-01 15:02:14.770868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e59cb0 (9): Bad file descriptor 00:21:56.619 [2024-12-01 15:02:14.772953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.619 [2024-12-01 15:02:14.789073] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
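The failover sequence above (queued I/O completed with ABORTED - SQ DELETION, a path switch from 10.0.0.2:4420 to 10.0.0.2:4421, and a successful controller reset) relies on the host-side bdev_nvme controller having been registered with both target listeners under one controller name. A minimal sketch of how such an alternate path is typically added with SPDK's rpc.py is shown below; the controller name NVMe0 is an assumption for illustration, and exact option spellings can vary between SPDK releases.
    # Primary path for nqn.2016-06.io.spdk:cnode1 (controller name NVMe0 is assumed)
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # Registering a second transport ID under the same -b name gives bdev_nvme a
    # failover target; when the 4420 queue pair drops, the controller is reset
    # against the 4421 listener, as logged above.
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1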
00:21:56.619 [2024-12-01 15:02:18.392815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.619 [2024-12-01 15:02:18.392849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.619 [2024-12-01 15:02:18.392870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.392898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.392915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.392928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.392942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.392954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.392967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.392978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.392992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393129] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-12-01 15:02:18.393499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-12-01 15:02:18.393553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-12-01 15:02:18.393579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-12-01 15:02:18.393607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-12-01 15:02:18.393634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-12-01 15:02:18.393661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.393978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.393991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.394002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.394015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.394026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.394039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-12-01 15:02:18.394051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-12-01 15:02:18.394064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51360 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.394213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.394237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.394262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 
[2024-12-01 15:02:18.394361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.394801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.394854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.394904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.394930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.394955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.394981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.394994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.395006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.395019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.395031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.395044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.395055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.395069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.395090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.395104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.395116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.395137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.395149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.395162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-12-01 15:02:18.395173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.395187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-12-01 15:02:18.395198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.621 [2024-12-01 15:02:18.395211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 
[2024-12-01 15:02:18.395445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.395931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:17 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-12-01 15:02:18.395981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.395994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51864 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.622 [2024-12-01 15:02:18.396377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.622 [2024-12-01 15:02:18.396389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8b10 is same with the state(5) to be set 00:21:56.622 [2024-12-01 15:02:18.396402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.623 [2024-12-01 15:02:18.396417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.623 [2024-12-01 15:02:18.396427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51920 len:8 PRP1 0x0 PRP2 0x0 00:21:56.623 [2024-12-01 15:02:18.396438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:18.396472] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1eb8b10 was disconnected and freed. reset controller. 
00:21:56.623 [2024-12-01 15:02:18.396487] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:56.623 [2024-12-01 15:02:18.396533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.623 [2024-12-01 15:02:18.396553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:18.396565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.623 [2024-12-01 15:02:18.396577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:18.396589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.623 [2024-12-01 15:02:18.396600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:18.396613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.623 [2024-12-01 15:02:18.396624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:18.396635] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:56.623 [2024-12-01 15:02:18.396660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e59cb0 (9): Bad file descriptor 00:21:56.623 [2024-12-01 15:02:18.398422] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.623 [2024-12-01 15:02:18.417870] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
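Each path switch in this run shows up as a fixed pair of notices: bdev_nvme_failover_trid prints "Start failover from <old addr> to <new addr>" after the aborted-I/O dump, and _bdev_nvme_reset_ctrlr_complete prints "Resetting controller successful." once the controller has been reconnected on the new path. A minimal sketch for pulling those transitions out of a captured copy of this output; the try.txt path is an assumption taken from the file the failover.sh trace further below reads back with cat.

```sh
#!/usr/bin/env bash
# Minimal sketch: list every failover transition recorded by bdev_nvme in a
# captured copy of this output. The try.txt path is an assumption based on
# the file the failover.sh trace below cats.
grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
```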
00:21:56.623 [2024-12-01 15:02:22.945932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.945983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946274] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-12-01 15:02:22.946792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-12-01 15:02:22.946948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.946970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:62 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.946993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.947018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-12-01 15:02:22.947030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.947044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-12-01 15:02:22.947057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.947070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.947081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.947094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.947106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.623 [2024-12-01 15:02:22.947119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-12-01 15:02:22.947130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75584 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-12-01 15:02:22.947469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:56.624 [2024-12-01 15:02:22.947518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-12-01 15:02:22.947566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-12-01 15:02:22.947590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947798] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.947961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.947976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-12-01 15:02:22.947989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-12-01 15:02:22.948018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-12-01 15:02:22.948047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.948075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-12-01 15:02:22.948112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.948141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.948169] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-12-01 15:02:22.948197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-12-01 15:02:22.948255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.948282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-12-01 15:02:22.948342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-12-01 15:02:22.948382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-12-01 15:02:22.948395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-12-01 15:02:22.948433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-12-01 15:02:22.948458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-12-01 15:02:22.948512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:56.625 [2024-12-01 15:02:22.948773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-12-01 15:02:22.948785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.948987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.948998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-12-01 15:02:22.949038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-12-01 15:02:22.949063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949076] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-12-01 15:02:22.949088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-12-01 15:02:22.949137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-12-01 15:02:22.949520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-12-01 15:02:22.949733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-12-01 15:02:22.949784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.625 [2024-12-01 15:02:22.949797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.626 [2024-12-01 15:02:22.949818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.626 [2024-12-01 15:02:22.949841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.626 [2024-12-01 15:02:22.949854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.626 [2024-12-01 15:02:22.949868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.626 [2024-12-01 15:02:22.949880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.626 [2024-12-01 15:02:22.949898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.626 [2024-12-01 15:02:22.949911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.626 [2024-12-01 15:02:22.949939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.626 [2024-12-01 15:02:22.949951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.626 [2024-12-01 15:02:22.949963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee0210 is same with the state(5) to be set 00:21:56.626 [2024-12-01 15:02:22.949977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:56.626 [2024-12-01 15:02:22.949987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:56.626 [2024-12-01 15:02:22.949995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76048 len:8 PRP1 0x0 PRP2 0x0 00:21:56.626 [2024-12-01 15:02:22.950006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.626 [2024-12-01 15:02:22.950041] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ee0210 was disconnected and freed. reset controller. 
00:21:56.626 [2024-12-01 15:02:22.950056] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:56.626 [2024-12-01 15:02:22.950115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.626 [2024-12-01 15:02:22.950142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.626 [2024-12-01 15:02:22.950167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.626 [2024-12-01 15:02:22.950179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.626 [2024-12-01 15:02:22.950190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.626 [2024-12-01 15:02:22.950201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.626 [2024-12-01 15:02:22.950213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.626 [2024-12-01 15:02:22.950224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.626 [2024-12-01 15:02:22.950235] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:56.626 [2024-12-01 15:02:22.950276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e59cb0 (9): Bad file descriptor 00:21:56.626 [2024-12-01 15:02:22.952324] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.626 [2024-12-01 15:02:22.982998] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
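This is the last successful controller reset of the pass; the failover.sh trace that follows counts the "Resetting controller successful" notices with grep -c and requires exactly three before it moves on to the second bdevperf phase. A standalone version of that check, with the input file hedged as try.txt since the trace does not show what grep reads:

```sh
#!/usr/bin/env bash
# Standalone version of the pass/fail check traced below: the run must have
# logged exactly one 'Resetting controller successful' notice per failover.
# The input file is an assumption; the trace does not show what grep reads.
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")
if (( count != 3 )); then
    echo "expected 3 successful controller resets, found $count" >&2
    exit 1
fi
```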
00:21:56.626
00:21:56.626 Latency(us)
00:21:56.626 [2024-12-01T15:02:29.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.626 [2024-12-01T15:02:29.741Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:56.626 Verification LBA range: start 0x0 length 0x4000
00:21:56.626 NVMe0n1 : 15.01 15091.72 58.95 274.87 0.00 8314.83 551.10 14298.76
00:21:56.626 [2024-12-01T15:02:29.741Z] ===================================================================================================================
00:21:56.626 [2024-12-01T15:02:29.741Z] Total : 15091.72 58.95 274.87 0.00 8314.83 551.10 14298.76
00:21:56.626 Received shutdown signal, test time was about 15.000000 seconds
00:21:56.626
00:21:56.626 Latency(us)
00:21:56.626 [2024-12-01T15:02:29.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.626 [2024-12-01T15:02:29.741Z] ===================================================================================================================
00:21:56.626 [2024-12-01T15:02:29.741Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:56.626 15:02:28 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:56.626 15:02:28 -- host/failover.sh@65 -- # count=3
00:21:56.626 15:02:28 -- host/failover.sh@67 -- # (( count != 3 ))
00:21:56.626 15:02:28 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:56.626 15:02:28 -- host/failover.sh@73 -- # bdevperf_pid=95886
00:21:56.626 15:02:28 -- host/failover.sh@75 -- # waitforlisten 95886 /var/tmp/bdevperf.sock
00:21:56.626 15:02:28 -- common/autotest_common.sh@829 -- # '[' -z 95886 ']'
00:21:56.626 15:02:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:56.626 15:02:28 -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:56.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
15:02:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
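At this point failover.sh starts its second phase: a fresh bdevperf is launched with -z so the workload sits idle until it is kicked off over JSON-RPC, -r puts its RPC server on /var/tmp/bdevperf.sock, and waitforlisten polls until that socket is ready. A sketch of the same launch-and-wait, with a plain polling loop standing in for the waitforlisten helper, whose body is not shown in this trace:

```sh
#!/usr/bin/env bash
# Sketch of the launch traced above: start bdevperf idle (-z) with its
# JSON-RPC server on /var/tmp/bdevperf.sock, then wait for the socket.
# The polling loop is a stand-in for the waitforlisten helper.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
while [ ! -S /var/tmp/bdevperf.sock ]; do
    sleep 0.1
done
```

The workload itself is only started later by the perform_tests RPC that appears further down in the trace.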
00:21:56.626 15:02:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.626 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:21:56.885 15:02:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.885 15:02:29 -- common/autotest_common.sh@862 -- # return 0 00:21:56.885 15:02:29 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:57.144 [2024-12-01 15:02:30.057444] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:57.144 15:02:30 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:57.403 [2024-12-01 15:02:30.353866] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:57.404 15:02:30 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:57.662 NVMe0n1 00:21:57.663 15:02:30 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:57.921 00:21:57.921 15:02:30 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:58.179 00:21:58.179 15:02:31 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:58.179 15:02:31 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:58.437 15:02:31 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:58.695 15:02:31 -- host/failover.sh@87 -- # sleep 3 00:22:01.981 15:02:34 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.981 15:02:34 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:01.981 15:02:35 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.981 15:02:35 -- host/failover.sh@90 -- # run_test_pid=96024 00:22:01.981 15:02:35 -- host/failover.sh@92 -- # wait 96024 00:22:03.357 0 00:22:03.357 15:02:36 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:03.357 [2024-12-01 15:02:28.910910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
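Editor's note: steps @76-@92 above build the actual failover scenario: the target gets two extra listeners, bdevperf attaches the same subsystem over all three portals so controller NVMe0 has three paths, the path under test is detached, and perform_tests drives I/O while bdev_nvme fails over (the try.txt excerpt being dumped here shows exactly that). Collapsed into a plain script, roughly; this is a sketch of the sequence, not the test itself, with the addresses, ports and NQN taken from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # Target side: two additional listeners (4420 is already listening).
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

  # Host side (bdevperf): attach the same subsystem over all three portals.
  for port in 4420 4421 4422; do
      $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
          -s "$port" -f ipv4 -n "$nqn"
  done
  $rpc -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0

  # Drop the path under test and let I/O run over the remaining ones.
  $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n "$nqn"
  sleep 3
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests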
00:22:03.357 [2024-12-01 15:02:28.911602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95886 ] 00:22:03.357 [2024-12-01 15:02:29.047123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.357 [2024-12-01 15:02:29.103908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.357 [2024-12-01 15:02:31.707467] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:03.357 [2024-12-01 15:02:31.707983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.358 [2024-12-01 15:02:31.708076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.358 [2024-12-01 15:02:31.708168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.358 [2024-12-01 15:02:31.708237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.358 [2024-12-01 15:02:31.708301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.358 [2024-12-01 15:02:31.708373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.358 [2024-12-01 15:02:31.708429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.358 [2024-12-01 15:02:31.708495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.358 [2024-12-01 15:02:31.708549] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.358 [2024-12-01 15:02:31.708641] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.358 [2024-12-01 15:02:31.708718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eacb0 (9): Bad file descriptor 00:22:03.358 [2024-12-01 15:02:31.719000] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:03.358 Running I/O for 1 seconds... 
00:22:03.358 00:22:03.358 Latency(us) 00:22:03.358 [2024-12-01T15:02:36.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.358 [2024-12-01T15:02:36.473Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:03.358 Verification LBA range: start 0x0 length 0x4000 00:22:03.358 NVMe0n1 : 1.01 15394.80 60.14 0.00 0.00 8277.67 1131.99 10962.39 00:22:03.358 [2024-12-01T15:02:36.473Z] =================================================================================================================== 00:22:03.358 [2024-12-01T15:02:36.473Z] Total : 15394.80 60.14 0.00 0.00 8277.67 1131.99 10962.39 00:22:03.358 15:02:36 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:03.358 15:02:36 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:03.358 15:02:36 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:03.616 15:02:36 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:03.616 15:02:36 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:03.875 15:02:36 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:04.134 15:02:37 -- host/failover.sh@101 -- # sleep 3 00:22:07.422 15:02:40 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:07.422 15:02:40 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:07.422 15:02:40 -- host/failover.sh@108 -- # killprocess 95886 00:22:07.422 15:02:40 -- common/autotest_common.sh@936 -- # '[' -z 95886 ']' 00:22:07.422 15:02:40 -- common/autotest_common.sh@940 -- # kill -0 95886 00:22:07.422 15:02:40 -- common/autotest_common.sh@941 -- # uname 00:22:07.422 15:02:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:07.422 15:02:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95886 00:22:07.422 15:02:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:07.422 15:02:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:07.422 killing process with pid 95886 00:22:07.422 15:02:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95886' 00:22:07.422 15:02:40 -- common/autotest_common.sh@955 -- # kill 95886 00:22:07.422 15:02:40 -- common/autotest_common.sh@960 -- # wait 95886 00:22:07.681 15:02:40 -- host/failover.sh@110 -- # sync 00:22:07.681 15:02:40 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.940 15:02:40 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:07.940 15:02:40 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:07.940 15:02:40 -- host/failover.sh@116 -- # nvmftestfini 00:22:07.940 15:02:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:07.940 15:02:40 -- nvmf/common.sh@116 -- # sync 00:22:07.940 15:02:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:07.940 15:02:40 -- nvmf/common.sh@119 -- # set +e 00:22:07.940 15:02:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:07.940 15:02:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:07.940 rmmod nvme_tcp 
00:22:07.940 rmmod nvme_fabrics 00:22:07.940 rmmod nvme_keyring 00:22:07.940 15:02:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:07.940 15:02:40 -- nvmf/common.sh@123 -- # set -e 00:22:07.940 15:02:40 -- nvmf/common.sh@124 -- # return 0 00:22:07.940 15:02:40 -- nvmf/common.sh@477 -- # '[' -n 95516 ']' 00:22:07.940 15:02:40 -- nvmf/common.sh@478 -- # killprocess 95516 00:22:07.940 15:02:40 -- common/autotest_common.sh@936 -- # '[' -z 95516 ']' 00:22:07.940 15:02:40 -- common/autotest_common.sh@940 -- # kill -0 95516 00:22:07.940 15:02:40 -- common/autotest_common.sh@941 -- # uname 00:22:07.940 15:02:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:07.940 15:02:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95516 00:22:07.940 15:02:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:07.940 15:02:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:07.940 killing process with pid 95516 00:22:07.940 15:02:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95516' 00:22:07.940 15:02:40 -- common/autotest_common.sh@955 -- # kill 95516 00:22:07.940 15:02:40 -- common/autotest_common.sh@960 -- # wait 95516 00:22:08.199 15:02:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:08.199 15:02:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:08.199 15:02:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:08.199 15:02:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.199 15:02:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:08.199 15:02:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.199 15:02:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.199 15:02:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.459 15:02:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:08.459 00:22:08.459 real 0m32.736s 00:22:08.459 user 2m5.990s 00:22:08.459 sys 0m5.297s 00:22:08.459 15:02:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:08.459 ************************************ 00:22:08.459 15:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:08.459 END TEST nvmf_failover 00:22:08.459 ************************************ 00:22:08.459 15:02:41 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:08.459 15:02:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:08.459 15:02:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:08.459 15:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:08.459 ************************************ 00:22:08.459 START TEST nvmf_discovery 00:22:08.459 ************************************ 00:22:08.459 15:02:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:08.459 * Looking for test storage... 
00:22:08.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:08.459 15:02:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:08.459 15:02:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:08.459 15:02:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:08.459 15:02:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:08.459 15:02:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:08.459 15:02:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:08.459 15:02:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:08.459 15:02:41 -- scripts/common.sh@335 -- # IFS=.-: 00:22:08.459 15:02:41 -- scripts/common.sh@335 -- # read -ra ver1 00:22:08.459 15:02:41 -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.459 15:02:41 -- scripts/common.sh@336 -- # read -ra ver2 00:22:08.459 15:02:41 -- scripts/common.sh@337 -- # local 'op=<' 00:22:08.459 15:02:41 -- scripts/common.sh@339 -- # ver1_l=2 00:22:08.459 15:02:41 -- scripts/common.sh@340 -- # ver2_l=1 00:22:08.459 15:02:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:08.459 15:02:41 -- scripts/common.sh@343 -- # case "$op" in 00:22:08.459 15:02:41 -- scripts/common.sh@344 -- # : 1 00:22:08.459 15:02:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:08.459 15:02:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:08.459 15:02:41 -- scripts/common.sh@364 -- # decimal 1 00:22:08.459 15:02:41 -- scripts/common.sh@352 -- # local d=1 00:22:08.459 15:02:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.459 15:02:41 -- scripts/common.sh@354 -- # echo 1 00:22:08.459 15:02:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:08.459 15:02:41 -- scripts/common.sh@365 -- # decimal 2 00:22:08.459 15:02:41 -- scripts/common.sh@352 -- # local d=2 00:22:08.459 15:02:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.459 15:02:41 -- scripts/common.sh@354 -- # echo 2 00:22:08.459 15:02:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:08.459 15:02:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:08.459 15:02:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:08.459 15:02:41 -- scripts/common.sh@367 -- # return 0 00:22:08.459 15:02:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.459 15:02:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:08.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.459 --rc genhtml_branch_coverage=1 00:22:08.459 --rc genhtml_function_coverage=1 00:22:08.459 --rc genhtml_legend=1 00:22:08.459 --rc geninfo_all_blocks=1 00:22:08.459 --rc geninfo_unexecuted_blocks=1 00:22:08.459 00:22:08.459 ' 00:22:08.459 15:02:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:08.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.459 --rc genhtml_branch_coverage=1 00:22:08.459 --rc genhtml_function_coverage=1 00:22:08.459 --rc genhtml_legend=1 00:22:08.459 --rc geninfo_all_blocks=1 00:22:08.459 --rc geninfo_unexecuted_blocks=1 00:22:08.459 00:22:08.459 ' 00:22:08.459 15:02:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:08.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.459 --rc genhtml_branch_coverage=1 00:22:08.459 --rc genhtml_function_coverage=1 00:22:08.459 --rc genhtml_legend=1 00:22:08.459 --rc geninfo_all_blocks=1 00:22:08.459 --rc geninfo_unexecuted_blocks=1 00:22:08.459 00:22:08.459 ' 00:22:08.459 
15:02:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:08.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.459 --rc genhtml_branch_coverage=1 00:22:08.459 --rc genhtml_function_coverage=1 00:22:08.459 --rc genhtml_legend=1 00:22:08.459 --rc geninfo_all_blocks=1 00:22:08.459 --rc geninfo_unexecuted_blocks=1 00:22:08.459 00:22:08.459 ' 00:22:08.459 15:02:41 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:08.459 15:02:41 -- nvmf/common.sh@7 -- # uname -s 00:22:08.459 15:02:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.459 15:02:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.459 15:02:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.459 15:02:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.459 15:02:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.459 15:02:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.459 15:02:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.459 15:02:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.459 15:02:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.459 15:02:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.459 15:02:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:22:08.460 15:02:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:22:08.460 15:02:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.460 15:02:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.460 15:02:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:08.460 15:02:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:08.460 15:02:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.460 15:02:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.460 15:02:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.460 15:02:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.460 15:02:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.460 15:02:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.460 15:02:41 -- paths/export.sh@5 -- # export PATH 00:22:08.460 15:02:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.460 15:02:41 -- nvmf/common.sh@46 -- # : 0 00:22:08.460 15:02:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:08.460 15:02:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:08.460 15:02:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:08.460 15:02:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.460 15:02:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.460 15:02:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:08.460 15:02:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:08.460 15:02:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:08.460 15:02:41 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:08.460 15:02:41 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:08.460 15:02:41 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:08.460 15:02:41 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:08.460 15:02:41 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:08.460 15:02:41 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:08.460 15:02:41 -- host/discovery.sh@25 -- # nvmftestinit 00:22:08.460 15:02:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:08.460 15:02:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.460 15:02:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:08.460 15:02:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:08.460 15:02:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:08.460 15:02:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.460 15:02:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.460 15:02:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.460 15:02:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:08.460 15:02:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:08.460 15:02:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:08.460 15:02:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:08.460 15:02:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:08.460 15:02:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:08.460 15:02:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.460 15:02:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.460 15:02:41 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:08.460 15:02:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:08.460 15:02:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:08.460 15:02:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:08.460 15:02:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:08.460 15:02:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.460 15:02:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:08.460 15:02:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:08.460 15:02:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:08.460 15:02:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:08.460 15:02:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:08.720 15:02:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:08.720 Cannot find device "nvmf_tgt_br" 00:22:08.720 15:02:41 -- nvmf/common.sh@154 -- # true 00:22:08.720 15:02:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:08.720 Cannot find device "nvmf_tgt_br2" 00:22:08.720 15:02:41 -- nvmf/common.sh@155 -- # true 00:22:08.720 15:02:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:08.720 15:02:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:08.720 Cannot find device "nvmf_tgt_br" 00:22:08.720 15:02:41 -- nvmf/common.sh@157 -- # true 00:22:08.720 15:02:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:08.720 Cannot find device "nvmf_tgt_br2" 00:22:08.720 15:02:41 -- nvmf/common.sh@158 -- # true 00:22:08.720 15:02:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:08.720 15:02:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:08.720 15:02:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:08.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:08.720 15:02:41 -- nvmf/common.sh@161 -- # true 00:22:08.720 15:02:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:08.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:08.720 15:02:41 -- nvmf/common.sh@162 -- # true 00:22:08.720 15:02:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:08.720 15:02:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:08.720 15:02:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:08.720 15:02:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:08.720 15:02:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:08.720 15:02:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:08.720 15:02:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:08.720 15:02:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:08.720 15:02:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:08.720 15:02:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:08.720 15:02:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:08.720 15:02:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:08.720 15:02:41 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:08.720 15:02:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:08.720 15:02:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:08.720 15:02:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:08.980 15:02:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:08.980 15:02:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:08.980 15:02:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:08.980 15:02:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:08.980 15:02:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:08.980 15:02:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:08.980 15:02:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:08.980 15:02:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:08.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:22:08.980 00:22:08.980 --- 10.0.0.2 ping statistics --- 00:22:08.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.980 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:08.980 15:02:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:08.980 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:08.980 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:22:08.980 00:22:08.980 --- 10.0.0.3 ping statistics --- 00:22:08.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.980 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:08.980 15:02:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:08.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:22:08.980 00:22:08.980 --- 10.0.0.1 ping statistics --- 00:22:08.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.980 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:08.980 15:02:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.980 15:02:41 -- nvmf/common.sh@421 -- # return 0 00:22:08.980 15:02:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:08.980 15:02:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.980 15:02:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:08.980 15:02:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:08.980 15:02:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.980 15:02:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:08.980 15:02:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:08.980 15:02:41 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:08.980 15:02:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:08.980 15:02:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:08.980 15:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:08.980 15:02:41 -- nvmf/common.sh@469 -- # nvmfpid=96337 00:22:08.980 15:02:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:08.980 15:02:41 -- nvmf/common.sh@470 -- # waitforlisten 96337 00:22:08.980 15:02:41 -- common/autotest_common.sh@829 -- # '[' -z 96337 ']' 00:22:08.980 15:02:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.980 15:02:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.980 15:02:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.980 15:02:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.980 15:02:41 -- common/autotest_common.sh@10 -- # set +x 00:22:08.980 [2024-12-01 15:02:41.995043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:08.980 [2024-12-01 15:02:41.995126] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.239 [2024-12-01 15:02:42.134659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.239 [2024-12-01 15:02:42.214269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:09.239 [2024-12-01 15:02:42.214454] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.239 [2024-12-01 15:02:42.214470] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.239 [2024-12-01 15:02:42.214480] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
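Editor's note: the three pings above are the last step of nvmf_veth_init. The target side lives in the nvmf_tgt_ns_spdk namespace behind veth pairs (10.0.0.2 and 10.0.0.3), the initiator side keeps 10.0.0.1, everything is joined by the nvmf_br bridge, and only then is nvmf_tgt started inside the namespace. A bare-bones sketch of that wiring using the interface names and addresses from the log; the second target interface (nvmf_tgt_if2 with 10.0.0.3) and the bridge FORWARD rule are set up the same way and omitted here for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  # Connectivity check in both directions, then the target inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &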
00:22:09.239 [2024-12-01 15:02:42.214511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.176 15:02:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:10.176 15:02:42 -- common/autotest_common.sh@862 -- # return 0 00:22:10.176 15:02:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:10.176 15:02:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:10.176 15:02:42 -- common/autotest_common.sh@10 -- # set +x 00:22:10.176 15:02:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.176 15:02:42 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:10.176 15:02:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.176 15:02:42 -- common/autotest_common.sh@10 -- # set +x 00:22:10.176 [2024-12-01 15:02:43.001280] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.176 15:02:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.176 15:02:43 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:10.176 15:02:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.176 15:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:10.176 [2024-12-01 15:02:43.009464] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:10.176 15:02:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.176 15:02:43 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:10.176 15:02:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.176 15:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:10.176 null0 00:22:10.176 15:02:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.176 15:02:43 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:10.176 15:02:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.176 15:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:10.176 null1 00:22:10.176 15:02:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.176 15:02:43 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:10.176 15:02:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.176 15:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:10.176 15:02:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.176 15:02:43 -- host/discovery.sh@45 -- # hostpid=96387 00:22:10.176 15:02:43 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:10.176 15:02:43 -- host/discovery.sh@46 -- # waitforlisten 96387 /tmp/host.sock 00:22:10.176 15:02:43 -- common/autotest_common.sh@829 -- # '[' -z 96387 ']' 00:22:10.176 15:02:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:10.176 15:02:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:10.176 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:10.176 15:02:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:10.176 15:02:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:10.176 15:02:43 -- common/autotest_common.sh@10 -- # set +x 00:22:10.176 [2024-12-01 15:02:43.095431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
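Editor's note: with the target up inside the namespace, discovery.sh@32-@46 above prepare both ends of the discovery test: the TCP transport is created, the well-known discovery subsystem listens on port 8009, two null bdevs are created for later use as namespaces, and a second nvmf_tgt is started as the "host" application on /tmp/host.sock. Roughly, using the suite's rpc_cmd wrapper (which forwards to scripts/rpc.py against the target's RPC socket):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512   # name, size and block size as above
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine

  # Separate SPDK app that will act as the discovering host, on its own socket.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!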
00:22:10.176 [2024-12-01 15:02:43.095536] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96387 ] 00:22:10.176 [2024-12-01 15:02:43.240286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.435 [2024-12-01 15:02:43.303220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:10.435 [2024-12-01 15:02:43.303412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.003 15:02:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.003 15:02:44 -- common/autotest_common.sh@862 -- # return 0 00:22:11.003 15:02:44 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.003 15:02:44 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:11.003 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.003 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.262 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.262 15:02:44 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:11.262 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.262 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.262 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.262 15:02:44 -- host/discovery.sh@72 -- # notify_id=0 00:22:11.262 15:02:44 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:11.262 15:02:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.262 15:02:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.262 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.262 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.262 15:02:44 -- host/discovery.sh@59 -- # sort 00:22:11.262 15:02:44 -- host/discovery.sh@59 -- # xargs 00:22:11.262 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.262 15:02:44 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:11.262 15:02:44 -- host/discovery.sh@79 -- # get_bdev_list 00:22:11.262 15:02:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.262 15:02:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.262 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.262 15:02:44 -- host/discovery.sh@55 -- # sort 00:22:11.262 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.262 15:02:44 -- host/discovery.sh@55 -- # xargs 00:22:11.262 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.262 15:02:44 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:11.262 15:02:44 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:11.262 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.262 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.262 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.262 15:02:44 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:11.262 15:02:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.262 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.262 15:02:44 -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.262 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.262 15:02:44 -- host/discovery.sh@59 -- # sort 00:22:11.262 15:02:44 -- host/discovery.sh@59 -- # xargs 00:22:11.262 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.262 15:02:44 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:11.262 15:02:44 -- host/discovery.sh@83 -- # get_bdev_list 00:22:11.262 15:02:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.262 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.262 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.262 15:02:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.262 15:02:44 -- host/discovery.sh@55 -- # sort 00:22:11.262 15:02:44 -- host/discovery.sh@55 -- # xargs 00:22:11.262 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.262 15:02:44 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:11.262 15:02:44 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:11.263 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.263 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.263 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.263 15:02:44 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:11.263 15:02:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.263 15:02:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.263 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.263 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.263 15:02:44 -- host/discovery.sh@59 -- # xargs 00:22:11.263 15:02:44 -- host/discovery.sh@59 -- # sort 00:22:11.522 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.522 15:02:44 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:11.522 15:02:44 -- host/discovery.sh@87 -- # get_bdev_list 00:22:11.522 15:02:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.522 15:02:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.522 15:02:44 -- host/discovery.sh@55 -- # sort 00:22:11.522 15:02:44 -- host/discovery.sh@55 -- # xargs 00:22:11.522 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.522 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.522 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.522 15:02:44 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:11.522 15:02:44 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:11.522 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.522 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.522 [2024-12-01 15:02:44.485855] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.522 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.522 15:02:44 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:11.522 15:02:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.522 15:02:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.522 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.522 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.522 15:02:44 -- host/discovery.sh@59 -- # sort 00:22:11.522 15:02:44 -- host/discovery.sh@59 -- # xargs 
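Editor's note: the repeated rpc_cmd / jq / sort / xargs runs above are discovery.sh's query helpers. After bdev_nvme_start_discovery is pointed at the 8009 listener (discovery.sh@51), get_subsystem_names lists the bdev_nvme controllers known to the host application and get_bdev_list the bdevs they expose, both reduced to a sorted, space-separated string so they can be compared with the expected value (still empty here, since nothing has been advertised yet). As stand-alone functions they would look roughly like this, talking to the host socket directly instead of through rpc_cmd:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_sock=/tmp/host.sock

  # Start discovery against the target's discovery subsystem (discovery.sh@51).
  "$rpc" -s "$host_sock" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  get_subsystem_names() {
      "$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {
      "$rpc" -s "$host_sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  [[ "$(get_subsystem_names)" == "" ]]   # nothing discovered yet
  [[ "$(get_bdev_list)" == "" ]]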
00:22:11.522 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.522 15:02:44 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:11.522 15:02:44 -- host/discovery.sh@93 -- # get_bdev_list 00:22:11.522 15:02:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.522 15:02:44 -- host/discovery.sh@55 -- # sort 00:22:11.522 15:02:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.522 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.522 15:02:44 -- host/discovery.sh@55 -- # xargs 00:22:11.522 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.522 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.522 15:02:44 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:11.522 15:02:44 -- host/discovery.sh@94 -- # get_notification_count 00:22:11.522 15:02:44 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:11.522 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.522 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.522 15:02:44 -- host/discovery.sh@74 -- # jq '. | length' 00:22:11.522 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.781 15:02:44 -- host/discovery.sh@74 -- # notification_count=0 00:22:11.781 15:02:44 -- host/discovery.sh@75 -- # notify_id=0 00:22:11.781 15:02:44 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:11.781 15:02:44 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:11.781 15:02:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.781 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:22:11.781 15:02:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.781 15:02:44 -- host/discovery.sh@100 -- # sleep 1 00:22:12.040 [2024-12-01 15:02:45.134547] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:12.040 [2024-12-01 15:02:45.134575] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:12.040 [2024-12-01 15:02:45.134591] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:12.298 [2024-12-01 15:02:45.220659] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:12.298 [2024-12-01 15:02:45.276248] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:12.298 [2024-12-01 15:02:45.276282] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:12.555 15:02:45 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:12.555 15:02:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:12.555 15:02:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:12.555 15:02:45 -- host/discovery.sh@59 -- # sort 00:22:12.555 15:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.555 15:02:45 -- common/autotest_common.sh@10 -- # set +x 00:22:12.555 15:02:45 -- host/discovery.sh@59 -- # xargs 00:22:12.814 15:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.814 15:02:45 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.814 15:02:45 -- host/discovery.sh@102 -- # get_bdev_list 00:22:12.814 15:02:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:12.814 15:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.814 15:02:45 -- common/autotest_common.sh@10 -- # set +x 00:22:12.814 15:02:45 -- host/discovery.sh@55 -- # sort 00:22:12.814 15:02:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:12.814 15:02:45 -- host/discovery.sh@55 -- # xargs 00:22:12.814 15:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.814 15:02:45 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:12.814 15:02:45 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:12.814 15:02:45 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:12.814 15:02:45 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:12.814 15:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.814 15:02:45 -- common/autotest_common.sh@10 -- # set +x 00:22:12.814 15:02:45 -- host/discovery.sh@63 -- # sort -n 00:22:12.814 15:02:45 -- host/discovery.sh@63 -- # xargs 00:22:12.814 15:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.814 15:02:45 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:12.814 15:02:45 -- host/discovery.sh@104 -- # get_notification_count 00:22:12.814 15:02:45 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:12.814 15:02:45 -- host/discovery.sh@74 -- # jq '. | length' 00:22:12.814 15:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.814 15:02:45 -- common/autotest_common.sh@10 -- # set +x 00:22:12.814 15:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.814 15:02:45 -- host/discovery.sh@74 -- # notification_count=1 00:22:12.814 15:02:45 -- host/discovery.sh@75 -- # notify_id=1 00:22:12.814 15:02:45 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:12.814 15:02:45 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:12.814 15:02:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.814 15:02:45 -- common/autotest_common.sh@10 -- # set +x 00:22:12.814 15:02:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.814 15:02:45 -- host/discovery.sh@109 -- # sleep 1 00:22:14.189 15:02:46 -- host/discovery.sh@110 -- # get_bdev_list 00:22:14.189 15:02:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.189 15:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.189 15:02:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.189 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:22:14.189 15:02:46 -- host/discovery.sh@55 -- # sort 00:22:14.189 15:02:46 -- host/discovery.sh@55 -- # xargs 00:22:14.189 15:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.189 15:02:46 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:14.189 15:02:46 -- host/discovery.sh@111 -- # get_notification_count 00:22:14.189 15:02:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:14.189 15:02:46 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:14.189 15:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.189 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:22:14.189 15:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.189 15:02:46 -- host/discovery.sh@74 -- # notification_count=1 00:22:14.189 15:02:46 -- host/discovery.sh@75 -- # notify_id=2 00:22:14.189 15:02:46 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:14.189 15:02:46 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:14.189 15:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.189 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:22:14.189 [2024-12-01 15:02:46.998670] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:14.190 [2024-12-01 15:02:46.999340] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:14.190 [2024-12-01 15:02:46.999368] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:14.190 15:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.190 15:02:47 -- host/discovery.sh@117 -- # sleep 1 00:22:14.190 [2024-12-01 15:02:47.085421] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:14.190 [2024-12-01 15:02:47.146786] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:14.190 [2024-12-01 15:02:47.146940] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:14.190 [2024-12-01 15:02:47.147043] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:15.125 15:02:48 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:15.125 15:02:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:15.125 15:02:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:15.125 15:02:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.125 15:02:48 -- common/autotest_common.sh@10 -- # set +x 00:22:15.125 15:02:48 -- host/discovery.sh@59 -- # xargs 00:22:15.125 15:02:48 -- host/discovery.sh@59 -- # sort 00:22:15.125 15:02:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.125 15:02:48 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.125 15:02:48 -- host/discovery.sh@119 -- # get_bdev_list 00:22:15.125 15:02:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.125 15:02:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:15.125 15:02:48 -- host/discovery.sh@55 -- # sort 00:22:15.125 15:02:48 -- host/discovery.sh@55 -- # xargs 00:22:15.125 15:02:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.125 15:02:48 -- common/autotest_common.sh@10 -- # set +x 00:22:15.125 15:02:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.125 15:02:48 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:15.125 15:02:48 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:15.125 15:02:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:15.126 15:02:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.126 15:02:48 -- common/autotest_common.sh@10 -- 
# set +x 00:22:15.126 15:02:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:15.126 15:02:48 -- host/discovery.sh@63 -- # sort -n 00:22:15.126 15:02:48 -- host/discovery.sh@63 -- # xargs 00:22:15.126 15:02:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.126 15:02:48 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:15.126 15:02:48 -- host/discovery.sh@121 -- # get_notification_count 00:22:15.126 15:02:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:15.126 15:02:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.126 15:02:48 -- common/autotest_common.sh@10 -- # set +x 00:22:15.126 15:02:48 -- host/discovery.sh@74 -- # jq '. | length' 00:22:15.126 15:02:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.126 15:02:48 -- host/discovery.sh@74 -- # notification_count=0 00:22:15.126 15:02:48 -- host/discovery.sh@75 -- # notify_id=2 00:22:15.126 15:02:48 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:15.126 15:02:48 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:15.126 15:02:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.126 15:02:48 -- common/autotest_common.sh@10 -- # set +x 00:22:15.126 [2024-12-01 15:02:48.227556] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:15.126 [2024-12-01 15:02:48.227583] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:15.126 15:02:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.126 15:02:48 -- host/discovery.sh@127 -- # sleep 1 00:22:15.126 [2024-12-01 15:02:48.234132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.126 [2024-12-01 15:02:48.234196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.126 [2024-12-01 15:02:48.234208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.126 [2024-12-01 15:02:48.234216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.126 [2024-12-01 15:02:48.234240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.126 [2024-12-01 15:02:48.234248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.126 [2024-12-01 15:02:48.234274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.126 [2024-12-01 15:02:48.234282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.126 [2024-12-01 15:02:48.234291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d570 is same with the state(5) to be set 00:22:15.385 [2024-12-01 15:02:48.244066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88d570 (9): Bad file descriptor 00:22:15.385 [2024-12-01 15:02:48.254083] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.385 [2024-12-01 15:02:48.254167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.385 [2024-12-01 15:02:48.254210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.385 [2024-12-01 15:02:48.254225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88d570 with addr=10.0.0.2, port=4420 00:22:15.385 [2024-12-01 15:02:48.254234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d570 is same with the state(5) to be set 00:22:15.385 [2024-12-01 15:02:48.254250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88d570 (9): Bad file descriptor 00:22:15.385 [2024-12-01 15:02:48.254262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.385 [2024-12-01 15:02:48.254270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.385 [2024-12-01 15:02:48.254279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.385 [2024-12-01 15:02:48.254292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.385 [2024-12-01 15:02:48.264129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.385 [2024-12-01 15:02:48.264199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.385 [2024-12-01 15:02:48.264239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.385 [2024-12-01 15:02:48.264253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88d570 with addr=10.0.0.2, port=4420 00:22:15.385 [2024-12-01 15:02:48.264262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d570 is same with the state(5) to be set 00:22:15.385 [2024-12-01 15:02:48.264276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88d570 (9): Bad file descriptor 00:22:15.385 [2024-12-01 15:02:48.264288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.385 [2024-12-01 15:02:48.264296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.385 [2024-12-01 15:02:48.264304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.385 [2024-12-01 15:02:48.264317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:15.385 [2024-12-01 15:02:48.274174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.385 [2024-12-01 15:02:48.274250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.385 [2024-12-01 15:02:48.274294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.385 [2024-12-01 15:02:48.274309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88d570 with addr=10.0.0.2, port=4420 00:22:15.385 [2024-12-01 15:02:48.274318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d570 is same with the state(5) to be set 00:22:15.385 [2024-12-01 15:02:48.274332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88d570 (9): Bad file descriptor 00:22:15.385 [2024-12-01 15:02:48.274345] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.385 [2024-12-01 15:02:48.274353] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.385 [2024-12-01 15:02:48.274360] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.385 [2024-12-01 15:02:48.274373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.385 [2024-12-01 15:02:48.284220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.385 [2024-12-01 15:02:48.284291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.385 [2024-12-01 15:02:48.284331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.385 [2024-12-01 15:02:48.284346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88d570 with addr=10.0.0.2, port=4420 00:22:15.385 [2024-12-01 15:02:48.284355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d570 is same with the state(5) to be set 00:22:15.385 [2024-12-01 15:02:48.284369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88d570 (9): Bad file descriptor 00:22:15.385 [2024-12-01 15:02:48.284381] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.385 [2024-12-01 15:02:48.284388] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.385 [2024-12-01 15:02:48.284396] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.385 [2024-12-01 15:02:48.284408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:15.385 [2024-12-01 15:02:48.294264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.385 [2024-12-01 15:02:48.294333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.386 [2024-12-01 15:02:48.294372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.386 [2024-12-01 15:02:48.294387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88d570 with addr=10.0.0.2, port=4420 00:22:15.386 [2024-12-01 15:02:48.294396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d570 is same with the state(5) to be set 00:22:15.386 [2024-12-01 15:02:48.294409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88d570 (9): Bad file descriptor 00:22:15.386 [2024-12-01 15:02:48.294421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.386 [2024-12-01 15:02:48.294429] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.386 [2024-12-01 15:02:48.294437] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.386 [2024-12-01 15:02:48.294449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.386 [2024-12-01 15:02:48.304307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.386 [2024-12-01 15:02:48.304377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.386 [2024-12-01 15:02:48.304416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.386 [2024-12-01 15:02:48.304430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88d570 with addr=10.0.0.2, port=4420 00:22:15.386 [2024-12-01 15:02:48.304445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d570 is same with the state(5) to be set 00:22:15.386 [2024-12-01 15:02:48.304459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88d570 (9): Bad file descriptor 00:22:15.386 [2024-12-01 15:02:48.304471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.386 [2024-12-01 15:02:48.304479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.386 [2024-12-01 15:02:48.304486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.386 [2024-12-01 15:02:48.304498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
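[editor note] The repeated connect() failures (errno = 111) above are bdev_nvme retrying the 10.0.0.2:4420 listener the test just removed; the surviving path is then read back through the discovery.sh@63 trace. A minimal sketch of that helper, reconstructed from the traced rpc_cmd/jq/sort/xargs pipeline (the actual function in test/nvmf/host/discovery.sh may differ):

    get_subsystem_paths() {
        # Hypothetical reconstruction of the helper traced at host/discovery.sh@63:
        # list the trsvcid of every path of the named controller on the host socket.
        local name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

With 4420 removed, the expected output at the later discovery.sh@130 check is just "4421", which is what the trace below confirms.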
00:22:15.386 [2024-12-01 15:02:48.313641] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:15.386 [2024-12-01 15:02:48.313879] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:16.321 15:02:49 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:16.321 15:02:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:16.321 15:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.321 15:02:49 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:16.321 15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:22:16.321 15:02:49 -- host/discovery.sh@59 -- # sort 00:22:16.321 15:02:49 -- host/discovery.sh@59 -- # xargs 00:22:16.321 15:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.321 15:02:49 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.321 15:02:49 -- host/discovery.sh@129 -- # get_bdev_list 00:22:16.321 15:02:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.321 15:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.321 15:02:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:16.321 15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:22:16.321 15:02:49 -- host/discovery.sh@55 -- # sort 00:22:16.321 15:02:49 -- host/discovery.sh@55 -- # xargs 00:22:16.321 15:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.321 15:02:49 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:16.321 15:02:49 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:16.321 15:02:49 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:16.321 15:02:49 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:16.321 15:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.321 15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:22:16.321 15:02:49 -- host/discovery.sh@63 -- # xargs 00:22:16.321 15:02:49 -- host/discovery.sh@63 -- # sort -n 00:22:16.321 15:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.321 15:02:49 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:16.321 15:02:49 -- host/discovery.sh@131 -- # get_notification_count 00:22:16.321 15:02:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:16.321 15:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.321 15:02:49 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:16.321 15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:22:16.321 15:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.580 15:02:49 -- host/discovery.sh@74 -- # notification_count=0 00:22:16.580 15:02:49 -- host/discovery.sh@75 -- # notify_id=2 00:22:16.580 15:02:49 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:16.580 15:02:49 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:16.580 15:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.580 15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:22:16.580 15:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.580 15:02:49 -- host/discovery.sh@135 -- # sleep 1 00:22:17.516 15:02:50 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:17.516 15:02:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:17.516 15:02:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:17.516 15:02:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.516 15:02:50 -- common/autotest_common.sh@10 -- # set +x 00:22:17.516 15:02:50 -- host/discovery.sh@59 -- # sort 00:22:17.516 15:02:50 -- host/discovery.sh@59 -- # xargs 00:22:17.516 15:02:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.516 15:02:50 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:17.516 15:02:50 -- host/discovery.sh@137 -- # get_bdev_list 00:22:17.516 15:02:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.516 15:02:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.516 15:02:50 -- common/autotest_common.sh@10 -- # set +x 00:22:17.516 15:02:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.516 15:02:50 -- host/discovery.sh@55 -- # sort 00:22:17.516 15:02:50 -- host/discovery.sh@55 -- # xargs 00:22:17.516 15:02:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.516 15:02:50 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:17.516 15:02:50 -- host/discovery.sh@138 -- # get_notification_count 00:22:17.516 15:02:50 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:17.516 15:02:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:17.516 15:02:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.516 15:02:50 -- common/autotest_common.sh@10 -- # set +x 00:22:17.516 15:02:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.776 15:02:50 -- host/discovery.sh@74 -- # notification_count=2 00:22:17.776 15:02:50 -- host/discovery.sh@75 -- # notify_id=4 00:22:17.776 15:02:50 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:17.776 15:02:50 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.776 15:02:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.776 15:02:50 -- common/autotest_common.sh@10 -- # set +x 00:22:18.716 [2024-12-01 15:02:51.645212] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:18.716 [2024-12-01 15:02:51.645231] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:18.716 [2024-12-01 15:02:51.645245] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:18.716 [2024-12-01 15:02:51.731282] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:18.716 [2024-12-01 15:02:51.790042] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:18.716 [2024-12-01 15:02:51.790073] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:18.716 15:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.716 15:02:51 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:18.716 15:02:51 -- common/autotest_common.sh@650 -- # local es=0 00:22:18.716 15:02:51 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:18.716 15:02:51 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:18.716 15:02:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.716 15:02:51 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:18.716 15:02:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.716 15:02:51 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:18.716 15:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.716 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:22:18.716 2024/12/01 15:02:51 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:18.716 request: 00:22:18.716 { 00:22:18.716 "method": "bdev_nvme_start_discovery", 00:22:18.716 "params": { 00:22:18.716 "name": "nvme", 00:22:18.716 "trtype": "tcp", 00:22:18.716 "traddr": "10.0.0.2", 00:22:18.716 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:18.716 
"adrfam": "ipv4", 00:22:18.716 "trsvcid": "8009", 00:22:18.716 "wait_for_attach": true 00:22:18.716 } 00:22:18.716 } 00:22:18.716 Got JSON-RPC error response 00:22:18.716 GoRPCClient: error on JSON-RPC call 00:22:18.716 15:02:51 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:18.716 15:02:51 -- common/autotest_common.sh@653 -- # es=1 00:22:18.716 15:02:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:18.716 15:02:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:18.716 15:02:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:18.716 15:02:51 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:18.716 15:02:51 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:18.716 15:02:51 -- host/discovery.sh@67 -- # sort 00:22:18.716 15:02:51 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:18.716 15:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.716 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:22:18.716 15:02:51 -- host/discovery.sh@67 -- # xargs 00:22:18.716 15:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.976 15:02:51 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:18.976 15:02:51 -- host/discovery.sh@147 -- # get_bdev_list 00:22:18.976 15:02:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:18.976 15:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.976 15:02:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:18.976 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:22:18.976 15:02:51 -- host/discovery.sh@55 -- # sort 00:22:18.976 15:02:51 -- host/discovery.sh@55 -- # xargs 00:22:18.976 15:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.976 15:02:51 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:18.976 15:02:51 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:18.976 15:02:51 -- common/autotest_common.sh@650 -- # local es=0 00:22:18.976 15:02:51 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:18.976 15:02:51 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:18.976 15:02:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.976 15:02:51 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:18.976 15:02:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.976 15:02:51 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:18.976 15:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.976 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:22:18.976 2024/12/01 15:02:51 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:18.976 request: 00:22:18.976 { 00:22:18.976 "method": "bdev_nvme_start_discovery", 00:22:18.976 "params": { 00:22:18.976 "name": "nvme_second", 00:22:18.976 "trtype": "tcp", 00:22:18.976 "traddr": "10.0.0.2", 
00:22:18.976 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:18.976 "adrfam": "ipv4", 00:22:18.976 "trsvcid": "8009", 00:22:18.976 "wait_for_attach": true 00:22:18.976 } 00:22:18.976 } 00:22:18.976 Got JSON-RPC error response 00:22:18.976 GoRPCClient: error on JSON-RPC call 00:22:18.976 15:02:51 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:18.976 15:02:51 -- common/autotest_common.sh@653 -- # es=1 00:22:18.976 15:02:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:18.976 15:02:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:18.976 15:02:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:18.976 15:02:51 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:18.976 15:02:51 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:18.976 15:02:51 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:18.976 15:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.976 15:02:51 -- host/discovery.sh@67 -- # sort 00:22:18.976 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:22:18.976 15:02:51 -- host/discovery.sh@67 -- # xargs 00:22:18.976 15:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.976 15:02:51 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:18.976 15:02:51 -- host/discovery.sh@153 -- # get_bdev_list 00:22:18.976 15:02:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:18.976 15:02:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:18.976 15:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.976 15:02:52 -- common/autotest_common.sh@10 -- # set +x 00:22:18.976 15:02:52 -- host/discovery.sh@55 -- # sort 00:22:18.976 15:02:52 -- host/discovery.sh@55 -- # xargs 00:22:18.976 15:02:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.976 15:02:52 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:18.976 15:02:52 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:18.976 15:02:52 -- common/autotest_common.sh@650 -- # local es=0 00:22:18.976 15:02:52 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:18.976 15:02:52 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:18.976 15:02:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.976 15:02:52 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:18.976 15:02:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.976 15:02:52 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:18.976 15:02:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.976 15:02:52 -- common/autotest_common.sh@10 -- # set +x 00:22:20.350 [2024-12-01 15:02:53.052488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.350 [2024-12-01 15:02:53.052698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.350 [2024-12-01 15:02:53.052723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x928f80 with addr=10.0.0.2, port=8010 00:22:20.350 [2024-12-01 15:02:53.052739] 
nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:20.350 [2024-12-01 15:02:53.052747] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:20.350 [2024-12-01 15:02:53.052790] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:21.283 [2024-12-01 15:02:54.052477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.283 [2024-12-01 15:02:54.052540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.283 [2024-12-01 15:02:54.052556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x901ca0 with addr=10.0.0.2, port=8010 00:22:21.283 [2024-12-01 15:02:54.052568] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:21.283 [2024-12-01 15:02:54.052575] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:21.283 [2024-12-01 15:02:54.052582] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:22.217 [2024-12-01 15:02:55.052411] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:22.217 2024/12/01 15:02:55 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:22.217 request: 00:22:22.217 { 00:22:22.217 "method": "bdev_nvme_start_discovery", 00:22:22.217 "params": { 00:22:22.217 "name": "nvme_second", 00:22:22.217 "trtype": "tcp", 00:22:22.217 "traddr": "10.0.0.2", 00:22:22.217 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:22.217 "adrfam": "ipv4", 00:22:22.217 "trsvcid": "8010", 00:22:22.217 "attach_timeout_ms": 3000 00:22:22.217 } 00:22:22.217 } 00:22:22.217 Got JSON-RPC error response 00:22:22.217 GoRPCClient: error on JSON-RPC call 00:22:22.217 15:02:55 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:22.217 15:02:55 -- common/autotest_common.sh@653 -- # es=1 00:22:22.217 15:02:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:22.217 15:02:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:22.217 15:02:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:22.217 15:02:55 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:22.217 15:02:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:22.217 15:02:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.217 15:02:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:22.217 15:02:55 -- common/autotest_common.sh@10 -- # set +x 00:22:22.217 15:02:55 -- host/discovery.sh@67 -- # sort 00:22:22.217 15:02:55 -- host/discovery.sh@67 -- # xargs 00:22:22.217 15:02:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.217 15:02:55 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:22.217 15:02:55 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:22.217 15:02:55 -- host/discovery.sh@162 -- # kill 96387 00:22:22.217 15:02:55 -- host/discovery.sh@163 -- # nvmftestfini 00:22:22.217 15:02:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:22.217 15:02:55 -- nvmf/common.sh@116 -- # sync 00:22:22.217 15:02:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:22.217 15:02:55 -- nvmf/common.sh@119 -- # set +e 00:22:22.217 15:02:55 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:22:22.217 15:02:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:22.217 rmmod nvme_tcp 00:22:22.217 rmmod nvme_fabrics 00:22:22.217 rmmod nvme_keyring 00:22:22.217 15:02:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:22.217 15:02:55 -- nvmf/common.sh@123 -- # set -e 00:22:22.217 15:02:55 -- nvmf/common.sh@124 -- # return 0 00:22:22.217 15:02:55 -- nvmf/common.sh@477 -- # '[' -n 96337 ']' 00:22:22.217 15:02:55 -- nvmf/common.sh@478 -- # killprocess 96337 00:22:22.217 15:02:55 -- common/autotest_common.sh@936 -- # '[' -z 96337 ']' 00:22:22.217 15:02:55 -- common/autotest_common.sh@940 -- # kill -0 96337 00:22:22.217 15:02:55 -- common/autotest_common.sh@941 -- # uname 00:22:22.217 15:02:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:22.217 15:02:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96337 00:22:22.217 killing process with pid 96337 00:22:22.217 15:02:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:22.218 15:02:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:22.218 15:02:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96337' 00:22:22.218 15:02:55 -- common/autotest_common.sh@955 -- # kill 96337 00:22:22.218 15:02:55 -- common/autotest_common.sh@960 -- # wait 96337 00:22:22.474 15:02:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:22.474 15:02:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:22.474 15:02:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:22.474 15:02:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.474 15:02:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:22.474 15:02:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.474 15:02:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.474 15:02:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.474 15:02:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:22.474 ************************************ 00:22:22.474 END TEST nvmf_discovery 00:22:22.474 ************************************ 00:22:22.474 00:22:22.474 real 0m14.199s 00:22:22.474 user 0m27.567s 00:22:22.474 sys 0m1.734s 00:22:22.474 15:02:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:22.474 15:02:55 -- common/autotest_common.sh@10 -- # set +x 00:22:22.733 15:02:55 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:22.733 15:02:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:22.733 15:02:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:22.733 15:02:55 -- common/autotest_common.sh@10 -- # set +x 00:22:22.733 ************************************ 00:22:22.733 START TEST nvmf_discovery_remove_ifc 00:22:22.733 ************************************ 00:22:22.733 15:02:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:22.733 * Looking for test storage... 
00:22:22.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:22.733 15:02:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:22.733 15:02:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:22.733 15:02:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:22.733 15:02:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:22.733 15:02:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:22.733 15:02:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:22.733 15:02:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:22.733 15:02:55 -- scripts/common.sh@335 -- # IFS=.-: 00:22:22.733 15:02:55 -- scripts/common.sh@335 -- # read -ra ver1 00:22:22.733 15:02:55 -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.733 15:02:55 -- scripts/common.sh@336 -- # read -ra ver2 00:22:22.733 15:02:55 -- scripts/common.sh@337 -- # local 'op=<' 00:22:22.733 15:02:55 -- scripts/common.sh@339 -- # ver1_l=2 00:22:22.733 15:02:55 -- scripts/common.sh@340 -- # ver2_l=1 00:22:22.733 15:02:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:22.733 15:02:55 -- scripts/common.sh@343 -- # case "$op" in 00:22:22.733 15:02:55 -- scripts/common.sh@344 -- # : 1 00:22:22.733 15:02:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:22.733 15:02:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:22.733 15:02:55 -- scripts/common.sh@364 -- # decimal 1 00:22:22.733 15:02:55 -- scripts/common.sh@352 -- # local d=1 00:22:22.733 15:02:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.733 15:02:55 -- scripts/common.sh@354 -- # echo 1 00:22:22.733 15:02:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:22.733 15:02:55 -- scripts/common.sh@365 -- # decimal 2 00:22:22.733 15:02:55 -- scripts/common.sh@352 -- # local d=2 00:22:22.733 15:02:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.733 15:02:55 -- scripts/common.sh@354 -- # echo 2 00:22:22.733 15:02:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:22.733 15:02:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:22.733 15:02:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:22.733 15:02:55 -- scripts/common.sh@367 -- # return 0 00:22:22.733 15:02:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.733 15:02:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:22.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.733 --rc genhtml_branch_coverage=1 00:22:22.733 --rc genhtml_function_coverage=1 00:22:22.733 --rc genhtml_legend=1 00:22:22.733 --rc geninfo_all_blocks=1 00:22:22.733 --rc geninfo_unexecuted_blocks=1 00:22:22.733 00:22:22.733 ' 00:22:22.733 15:02:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:22.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.733 --rc genhtml_branch_coverage=1 00:22:22.733 --rc genhtml_function_coverage=1 00:22:22.733 --rc genhtml_legend=1 00:22:22.733 --rc geninfo_all_blocks=1 00:22:22.733 --rc geninfo_unexecuted_blocks=1 00:22:22.733 00:22:22.733 ' 00:22:22.733 15:02:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:22.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.733 --rc genhtml_branch_coverage=1 00:22:22.733 --rc genhtml_function_coverage=1 00:22:22.733 --rc genhtml_legend=1 00:22:22.733 --rc geninfo_all_blocks=1 00:22:22.733 --rc geninfo_unexecuted_blocks=1 00:22:22.733 00:22:22.733 ' 00:22:22.733 
15:02:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:22.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.733 --rc genhtml_branch_coverage=1 00:22:22.733 --rc genhtml_function_coverage=1 00:22:22.733 --rc genhtml_legend=1 00:22:22.733 --rc geninfo_all_blocks=1 00:22:22.733 --rc geninfo_unexecuted_blocks=1 00:22:22.733 00:22:22.733 ' 00:22:22.733 15:02:55 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.733 15:02:55 -- nvmf/common.sh@7 -- # uname -s 00:22:22.733 15:02:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.733 15:02:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.733 15:02:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.733 15:02:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.733 15:02:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.733 15:02:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.733 15:02:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.733 15:02:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.733 15:02:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.733 15:02:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.733 15:02:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:22:22.733 15:02:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:22:22.733 15:02:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.733 15:02:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.733 15:02:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:22.733 15:02:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.733 15:02:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.733 15:02:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.733 15:02:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.733 15:02:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.733 15:02:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.733 15:02:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.733 15:02:55 -- paths/export.sh@5 -- # export PATH 00:22:22.733 15:02:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.733 15:02:55 -- nvmf/common.sh@46 -- # : 0 00:22:22.733 15:02:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:22.734 15:02:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:22.734 15:02:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:22.734 15:02:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.734 15:02:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.734 15:02:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:22.734 15:02:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:22.734 15:02:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:22.734 15:02:55 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:22.734 15:02:55 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:22.734 15:02:55 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:22.734 15:02:55 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:22.734 15:02:55 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:22.734 15:02:55 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:22.734 15:02:55 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:22.734 15:02:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:22.734 15:02:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.734 15:02:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:22.734 15:02:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:22.734 15:02:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:22.734 15:02:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.734 15:02:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.734 15:02:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.734 15:02:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:22.734 15:02:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:22.734 15:02:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:22.734 15:02:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:22.734 15:02:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:22.734 15:02:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:22.734 15:02:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.734 15:02:55 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.734 15:02:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:22.734 15:02:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:22.734 15:02:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:22.734 15:02:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:22.734 15:02:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:22.734 15:02:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.734 15:02:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:22.734 15:02:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:22.734 15:02:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:22.734 15:02:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:22.734 15:02:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:22.992 15:02:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:22.992 Cannot find device "nvmf_tgt_br" 00:22:22.992 15:02:55 -- nvmf/common.sh@154 -- # true 00:22:22.992 15:02:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:22.992 Cannot find device "nvmf_tgt_br2" 00:22:22.992 15:02:55 -- nvmf/common.sh@155 -- # true 00:22:22.992 15:02:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:22.992 15:02:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:22.992 Cannot find device "nvmf_tgt_br" 00:22:22.992 15:02:55 -- nvmf/common.sh@157 -- # true 00:22:22.992 15:02:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:22.992 Cannot find device "nvmf_tgt_br2" 00:22:22.992 15:02:55 -- nvmf/common.sh@158 -- # true 00:22:22.992 15:02:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:22.992 15:02:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:22.992 15:02:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:22.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.992 15:02:55 -- nvmf/common.sh@161 -- # true 00:22:22.992 15:02:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:22.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.993 15:02:55 -- nvmf/common.sh@162 -- # true 00:22:22.993 15:02:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:22.993 15:02:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:22.993 15:02:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:22.993 15:02:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:22.993 15:02:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:22.993 15:02:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:22.993 15:02:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:22.993 15:02:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:22.993 15:02:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:22.993 15:02:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:22.993 15:02:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:22.993 15:02:56 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:22.993 15:02:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:22.993 15:02:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:22.993 15:02:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:22.993 15:02:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:22.993 15:02:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:22.993 15:02:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:22.993 15:02:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:23.251 15:02:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:23.251 15:02:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:23.251 15:02:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:23.251 15:02:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:23.251 15:02:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:23.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:22:23.251 00:22:23.251 --- 10.0.0.2 ping statistics --- 00:22:23.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.251 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:23.251 15:02:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:23.251 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:23.251 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:22:23.251 00:22:23.251 --- 10.0.0.3 ping statistics --- 00:22:23.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.251 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:23.251 15:02:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:23.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:23.251 00:22:23.251 --- 10.0.0.1 ping statistics --- 00:22:23.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.251 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:23.252 15:02:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.252 15:02:56 -- nvmf/common.sh@421 -- # return 0 00:22:23.252 15:02:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:23.252 15:02:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.252 15:02:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:23.252 15:02:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:23.252 15:02:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.252 15:02:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:23.252 15:02:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:23.252 15:02:56 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:23.252 15:02:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:23.252 15:02:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:23.252 15:02:56 -- common/autotest_common.sh@10 -- # set +x 00:22:23.252 15:02:56 -- nvmf/common.sh@469 -- # nvmfpid=96901 00:22:23.252 15:02:56 -- nvmf/common.sh@470 -- # waitforlisten 96901 00:22:23.252 15:02:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:23.252 15:02:56 -- common/autotest_common.sh@829 -- # '[' -z 96901 ']' 00:22:23.252 15:02:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.252 15:02:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.252 15:02:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.252 15:02:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.252 15:02:56 -- common/autotest_common.sh@10 -- # set +x 00:22:23.252 [2024-12-01 15:02:56.249652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:23.252 [2024-12-01 15:02:56.249744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.510 [2024-12-01 15:02:56.383989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.510 [2024-12-01 15:02:56.467470] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:23.510 [2024-12-01 15:02:56.467633] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.510 [2024-12-01 15:02:56.467646] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.510 [2024-12-01 15:02:56.467654] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.510 [2024-12-01 15:02:56.467684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.090 15:02:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.090 15:02:57 -- common/autotest_common.sh@862 -- # return 0 00:22:24.090 15:02:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:24.090 15:02:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:24.090 15:02:57 -- common/autotest_common.sh@10 -- # set +x 00:22:24.349 15:02:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.349 15:02:57 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:24.349 15:02:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.349 15:02:57 -- common/autotest_common.sh@10 -- # set +x 00:22:24.349 [2024-12-01 15:02:57.230585] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.350 [2024-12-01 15:02:57.238733] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:24.350 null0 00:22:24.350 [2024-12-01 15:02:57.270638] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.350 15:02:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.350 15:02:57 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96947 00:22:24.350 15:02:57 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96947 /tmp/host.sock 00:22:24.350 15:02:57 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:24.350 15:02:57 -- common/autotest_common.sh@829 -- # '[' -z 96947 ']' 00:22:24.350 15:02:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:24.350 15:02:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:24.350 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:24.350 15:02:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:24.350 15:02:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:24.350 15:02:57 -- common/autotest_common.sh@10 -- # set +x 00:22:24.350 [2024-12-01 15:02:57.338007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:24.350 [2024-12-01 15:02:57.338074] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96947 ] 00:22:24.609 [2024-12-01 15:02:57.470644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.609 [2024-12-01 15:02:57.524675] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:24.609 [2024-12-01 15:02:57.524825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.609 15:02:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.609 15:02:57 -- common/autotest_common.sh@862 -- # return 0 00:22:24.609 15:02:57 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.609 15:02:57 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:24.609 15:02:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.609 15:02:57 -- common/autotest_common.sh@10 -- # set +x 00:22:24.609 15:02:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.609 15:02:57 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:24.609 15:02:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.609 15:02:57 -- common/autotest_common.sh@10 -- # set +x 00:22:24.609 15:02:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.609 15:02:57 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:24.609 15:02:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.609 15:02:57 -- common/autotest_common.sh@10 -- # set +x 00:22:25.987 [2024-12-01 15:02:58.693625] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:25.987 [2024-12-01 15:02:58.693660] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:25.987 [2024-12-01 15:02:58.693678] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:25.987 [2024-12-01 15:02:58.780779] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:25.987 [2024-12-01 15:02:58.844269] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:25.987 [2024-12-01 15:02:58.844315] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:25.987 [2024-12-01 15:02:58.844341] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:25.987 [2024-12-01 15:02:58.844356] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:25.987 [2024-12-01 15:02:58.844374] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:25.987 15:02:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:25.987 15:02:58 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.987 15:02:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:25.987 15:02:58 -- common/autotest_common.sh@10 -- # set +x 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:25.987 [2024-12-01 15:02:58.852060] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcabda0 was disconnected and freed. delete nvme_qpair. 00:22:25.987 15:02:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:25.987 15:02:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.987 15:02:58 -- common/autotest_common.sh@10 -- # set +x 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:25.987 15:02:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.987 15:02:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:25.988 15:02:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:26.924 15:02:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:26.924 15:02:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:26.924 15:02:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.924 15:02:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.924 15:02:59 -- common/autotest_common.sh@10 -- # set +x 00:22:26.924 15:02:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:26.924 15:02:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:26.924 15:02:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.924 15:02:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:26.924 15:02:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:28.301 15:03:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:28.301 15:03:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:28.301 15:03:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.301 15:03:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.301 15:03:01 -- common/autotest_common.sh@10 -- # set +x 00:22:28.301 15:03:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:28.301 15:03:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:28.301 15:03:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.301 15:03:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:28.301 15:03:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:29.239 15:03:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:29.239 15:03:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:29.239 15:03:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:29.239 15:03:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.239 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:22:29.239 15:03:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:29.239 15:03:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.239 15:03:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.239 15:03:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:29.239 15:03:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:30.175 15:03:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.175 15:03:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.175 15:03:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.175 15:03:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.175 15:03:03 -- common/autotest_common.sh@10 -- # set +x 00:22:30.175 15:03:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.175 15:03:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.175 15:03:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.176 15:03:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:30.176 15:03:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:31.112 15:03:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:31.112 15:03:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.112 15:03:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.112 15:03:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:31.112 15:03:04 -- common/autotest_common.sh@10 -- # set +x 00:22:31.112 15:03:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:31.112 15:03:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:31.112 15:03:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.371 15:03:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:31.371 15:03:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:31.371 [2024-12-01 15:03:04.272483] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:31.371 [2024-12-01 15:03:04.272528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.371 [2024-12-01 15:03:04.272543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.371 [2024-12-01 15:03:04.272552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.371 [2024-12-01 15:03:04.272560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.371 [2024-12-01 15:03:04.272568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.371 [2024-12-01 15:03:04.272576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.371 [2024-12-01 15:03:04.272584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.371 [2024-12-01 15:03:04.272592] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.371 [2024-12-01 15:03:04.272600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.371 [2024-12-01 15:03:04.272608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.371 [2024-12-01 15:03:04.272617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc15690 is same with the state(5) to be set 00:22:31.371 [2024-12-01 15:03:04.282479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc15690 (9): Bad file descriptor 00:22:31.371 [2024-12-01 15:03:04.292497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:32.306 15:03:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.306 15:03:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.306 15:03:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.306 15:03:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.306 15:03:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.306 15:03:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.306 15:03:05 -- common/autotest_common.sh@10 -- # set +x 00:22:32.306 [2024-12-01 15:03:05.308876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:33.241 [2024-12-01 15:03:06.332876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:33.241 [2024-12-01 15:03:06.333233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc15690 with addr=10.0.0.2, port=4420 00:22:33.241 [2024-12-01 15:03:06.333285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc15690 is same with the state(5) to be set 00:22:33.241 [2024-12-01 15:03:06.333332] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:33.241 [2024-12-01 15:03:06.333356] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:33.241 [2024-12-01 15:03:06.333402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:33.241 [2024-12-01 15:03:06.333431] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:33.241 [2024-12-01 15:03:06.334189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc15690 (9): Bad file descriptor 00:22:33.241 [2024-12-01 15:03:06.334252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
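The repeating blocks above are iterations of the wait_for_bdev polling loop in discovery_remove_ifc.sh. Reconstructed from the xtrace lines tagged @29/@33/@34 (the command pipeline and the one-second sleep come straight from the trace; the exact function bodies, the parameter name, and any retry cap are assumptions), the pattern is roughly:

    get_bdev_list() {
        # Ask the host-side SPDK app on /tmp/host.sock for its bdevs and return
        # the names as one sorted, space-separated string (empty when none exist).
        # rpc_cmd is the autotest framework's RPC helper.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expected value:
        # '' while waiting for nvme0n1 to go away, 'nvme1n1' after rediscovery.
        local expected=$1    # hypothetical parameter name
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

Each one-second iteration in the log is one pass through that loop; the connection timeout and controller reset errors above are what eventually empty the list.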
00:22:33.241 [2024-12-01 15:03:06.334302] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:33.241 [2024-12-01 15:03:06.334367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.241 [2024-12-01 15:03:06.334398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.241 [2024-12-01 15:03:06.334424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.241 [2024-12-01 15:03:06.334447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.241 [2024-12-01 15:03:06.334470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.241 [2024-12-01 15:03:06.334491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.241 [2024-12-01 15:03:06.334513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.241 [2024-12-01 15:03:06.334534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.241 [2024-12-01 15:03:06.334558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.241 [2024-12-01 15:03:06.334578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.241 [2024-12-01 15:03:06.334599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
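The failure path being logged here was triggered by the target-interface teardown issued earlier (discovery_remove_ifc.sh@75-76); once the bdev list drains, the test restores connectivity (@82-83, a little further down) so the discovery service can re-attach the subsystem as nvme1. Inside the target's network namespace the two transitions are simply:

    # Drop the target address and link so the initiator loses its TCP connection:
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # Later, bring it back so discovery finds nqn.2016-06.io.spdk:cnode0 again:
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up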
00:22:33.241 [2024-12-01 15:03:06.334658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc73410 (9): Bad file descriptor 00:22:33.241 [2024-12-01 15:03:06.335658] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:33.241 [2024-12-01 15:03:06.335705] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:33.241 15:03:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.501 15:03:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:33.501 15:03:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.437 15:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.437 15:03:07 -- common/autotest_common.sh@10 -- # set +x 00:22:34.437 15:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.437 15:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.437 15:03:07 -- common/autotest_common.sh@10 -- # set +x 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.437 15:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:34.437 15:03:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:35.373 [2024-12-01 15:03:08.345195] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:35.373 [2024-12-01 15:03:08.345342] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:35.373 [2024-12-01 15:03:08.345372] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:35.373 [2024-12-01 15:03:08.431282] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:35.373 15:03:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:35.374 15:03:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.374 15:03:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.374 15:03:08 -- common/autotest_common.sh@10 -- # set +x 00:22:35.374 15:03:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:35.374 15:03:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 
00:22:35.374 15:03:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:35.374 15:03:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.374 [2024-12-01 15:03:08.486274] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:35.374 [2024-12-01 15:03:08.486453] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:35.374 [2024-12-01 15:03:08.486515] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:35.374 [2024-12-01 15:03:08.486620] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:35.374 [2024-12-01 15:03:08.486730] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:35.633 [2024-12-01 15:03:08.493776] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc790c0 was disconnected and freed. delete nvme_qpair. 00:22:35.633 15:03:08 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:35.633 15:03:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:36.570 15:03:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.570 15:03:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.570 15:03:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.570 15:03:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.570 15:03:09 -- common/autotest_common.sh@10 -- # set +x 00:22:36.570 15:03:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.570 15:03:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.570 15:03:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.570 15:03:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:36.570 15:03:09 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:36.570 15:03:09 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96947 00:22:36.570 15:03:09 -- common/autotest_common.sh@936 -- # '[' -z 96947 ']' 00:22:36.570 15:03:09 -- common/autotest_common.sh@940 -- # kill -0 96947 00:22:36.570 15:03:09 -- common/autotest_common.sh@941 -- # uname 00:22:36.570 15:03:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.570 15:03:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96947 00:22:36.570 killing process with pid 96947 00:22:36.570 15:03:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:36.570 15:03:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:36.570 15:03:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96947' 00:22:36.570 15:03:09 -- common/autotest_common.sh@955 -- # kill 96947 00:22:36.570 15:03:09 -- common/autotest_common.sh@960 -- # wait 96947 00:22:36.829 15:03:09 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:36.829 15:03:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:36.829 15:03:09 -- nvmf/common.sh@116 -- # sync 00:22:36.829 15:03:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:36.829 15:03:09 -- nvmf/common.sh@119 -- # set +e 00:22:36.829 15:03:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:36.829 15:03:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:36.829 rmmod nvme_tcp 00:22:36.830 rmmod nvme_fabrics 00:22:36.830 rmmod nvme_keyring 00:22:36.830 15:03:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:36.830 15:03:09 -- nvmf/common.sh@123 -- # set -e 00:22:36.830 15:03:09 -- 
nvmf/common.sh@124 -- # return 0 00:22:36.830 15:03:09 -- nvmf/common.sh@477 -- # '[' -n 96901 ']' 00:22:36.830 15:03:09 -- nvmf/common.sh@478 -- # killprocess 96901 00:22:36.830 15:03:09 -- common/autotest_common.sh@936 -- # '[' -z 96901 ']' 00:22:36.830 15:03:09 -- common/autotest_common.sh@940 -- # kill -0 96901 00:22:36.830 15:03:09 -- common/autotest_common.sh@941 -- # uname 00:22:36.830 15:03:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.830 15:03:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96901 00:22:36.830 killing process with pid 96901 00:22:36.830 15:03:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:36.830 15:03:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:36.830 15:03:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96901' 00:22:36.830 15:03:09 -- common/autotest_common.sh@955 -- # kill 96901 00:22:36.830 15:03:09 -- common/autotest_common.sh@960 -- # wait 96901 00:22:37.088 15:03:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:37.088 15:03:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:37.088 15:03:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:37.088 15:03:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.088 15:03:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:37.088 15:03:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.088 15:03:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.088 15:03:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.348 15:03:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:37.348 00:22:37.348 real 0m14.586s 00:22:37.348 user 0m24.763s 00:22:37.348 sys 0m1.611s 00:22:37.348 15:03:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:37.348 15:03:10 -- common/autotest_common.sh@10 -- # set +x 00:22:37.348 ************************************ 00:22:37.348 END TEST nvmf_discovery_remove_ifc 00:22:37.348 ************************************ 00:22:37.348 15:03:10 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:37.348 15:03:10 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:37.348 15:03:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:37.348 15:03:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:37.348 15:03:10 -- common/autotest_common.sh@10 -- # set +x 00:22:37.348 ************************************ 00:22:37.348 START TEST nvmf_digest 00:22:37.348 ************************************ 00:22:37.348 15:03:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:37.348 * Looking for test storage... 
00:22:37.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:37.348 15:03:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:37.348 15:03:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:37.348 15:03:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:37.348 15:03:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:37.348 15:03:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:37.348 15:03:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:37.348 15:03:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:37.348 15:03:10 -- scripts/common.sh@335 -- # IFS=.-: 00:22:37.348 15:03:10 -- scripts/common.sh@335 -- # read -ra ver1 00:22:37.348 15:03:10 -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.348 15:03:10 -- scripts/common.sh@336 -- # read -ra ver2 00:22:37.348 15:03:10 -- scripts/common.sh@337 -- # local 'op=<' 00:22:37.348 15:03:10 -- scripts/common.sh@339 -- # ver1_l=2 00:22:37.348 15:03:10 -- scripts/common.sh@340 -- # ver2_l=1 00:22:37.348 15:03:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:37.348 15:03:10 -- scripts/common.sh@343 -- # case "$op" in 00:22:37.348 15:03:10 -- scripts/common.sh@344 -- # : 1 00:22:37.348 15:03:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:37.348 15:03:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.348 15:03:10 -- scripts/common.sh@364 -- # decimal 1 00:22:37.348 15:03:10 -- scripts/common.sh@352 -- # local d=1 00:22:37.348 15:03:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.348 15:03:10 -- scripts/common.sh@354 -- # echo 1 00:22:37.348 15:03:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:37.348 15:03:10 -- scripts/common.sh@365 -- # decimal 2 00:22:37.348 15:03:10 -- scripts/common.sh@352 -- # local d=2 00:22:37.348 15:03:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.348 15:03:10 -- scripts/common.sh@354 -- # echo 2 00:22:37.348 15:03:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:37.348 15:03:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:37.348 15:03:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:37.348 15:03:10 -- scripts/common.sh@367 -- # return 0 00:22:37.348 15:03:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.348 15:03:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.348 --rc genhtml_branch_coverage=1 00:22:37.348 --rc genhtml_function_coverage=1 00:22:37.348 --rc genhtml_legend=1 00:22:37.348 --rc geninfo_all_blocks=1 00:22:37.348 --rc geninfo_unexecuted_blocks=1 00:22:37.348 00:22:37.348 ' 00:22:37.348 15:03:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.348 --rc genhtml_branch_coverage=1 00:22:37.348 --rc genhtml_function_coverage=1 00:22:37.348 --rc genhtml_legend=1 00:22:37.348 --rc geninfo_all_blocks=1 00:22:37.348 --rc geninfo_unexecuted_blocks=1 00:22:37.348 00:22:37.348 ' 00:22:37.348 15:03:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.348 --rc genhtml_branch_coverage=1 00:22:37.348 --rc genhtml_function_coverage=1 00:22:37.348 --rc genhtml_legend=1 00:22:37.348 --rc geninfo_all_blocks=1 00:22:37.348 --rc geninfo_unexecuted_blocks=1 00:22:37.348 00:22:37.348 ' 00:22:37.348 
15:03:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.348 --rc genhtml_branch_coverage=1 00:22:37.348 --rc genhtml_function_coverage=1 00:22:37.348 --rc genhtml_legend=1 00:22:37.348 --rc geninfo_all_blocks=1 00:22:37.348 --rc geninfo_unexecuted_blocks=1 00:22:37.348 00:22:37.348 ' 00:22:37.348 15:03:10 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:37.348 15:03:10 -- nvmf/common.sh@7 -- # uname -s 00:22:37.348 15:03:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.348 15:03:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.348 15:03:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.348 15:03:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.348 15:03:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.348 15:03:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.348 15:03:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.348 15:03:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.348 15:03:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.608 15:03:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.608 15:03:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:22:37.608 15:03:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:22:37.608 15:03:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.608 15:03:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.608 15:03:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:37.608 15:03:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:37.608 15:03:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.608 15:03:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.608 15:03:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.608 15:03:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.608 15:03:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.608 15:03:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.608 15:03:10 -- paths/export.sh@5 -- # export PATH 00:22:37.608 15:03:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.608 15:03:10 -- nvmf/common.sh@46 -- # : 0 00:22:37.608 15:03:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:37.608 15:03:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:37.608 15:03:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:37.608 15:03:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.608 15:03:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.608 15:03:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:37.608 15:03:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:37.608 15:03:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:37.608 15:03:10 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:37.608 15:03:10 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:37.608 15:03:10 -- host/digest.sh@16 -- # runtime=2 00:22:37.608 15:03:10 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:37.608 15:03:10 -- host/digest.sh@132 -- # nvmftestinit 00:22:37.608 15:03:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:37.608 15:03:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.608 15:03:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:37.608 15:03:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:37.608 15:03:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:37.608 15:03:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.608 15:03:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.608 15:03:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.608 15:03:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:37.608 15:03:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:37.608 15:03:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:37.608 15:03:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:37.608 15:03:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:37.608 15:03:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:37.608 15:03:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.608 15:03:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.608 15:03:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:37.608 15:03:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:37.608 15:03:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:22:37.608 15:03:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:37.608 15:03:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:37.608 15:03:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.608 15:03:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:37.608 15:03:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:37.608 15:03:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:37.608 15:03:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:37.608 15:03:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:37.608 15:03:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:37.608 Cannot find device "nvmf_tgt_br" 00:22:37.608 15:03:10 -- nvmf/common.sh@154 -- # true 00:22:37.608 15:03:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:37.608 Cannot find device "nvmf_tgt_br2" 00:22:37.608 15:03:10 -- nvmf/common.sh@155 -- # true 00:22:37.608 15:03:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:37.608 15:03:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:37.608 Cannot find device "nvmf_tgt_br" 00:22:37.608 15:03:10 -- nvmf/common.sh@157 -- # true 00:22:37.608 15:03:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:37.608 Cannot find device "nvmf_tgt_br2" 00:22:37.608 15:03:10 -- nvmf/common.sh@158 -- # true 00:22:37.608 15:03:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:37.608 15:03:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:37.608 15:03:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:37.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.608 15:03:10 -- nvmf/common.sh@161 -- # true 00:22:37.608 15:03:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:37.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.608 15:03:10 -- nvmf/common.sh@162 -- # true 00:22:37.608 15:03:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:37.608 15:03:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:37.608 15:03:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:37.608 15:03:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:37.608 15:03:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:37.608 15:03:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:37.608 15:03:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:37.608 15:03:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:37.608 15:03:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:37.608 15:03:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:37.608 15:03:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:37.608 15:03:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:37.608 15:03:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:37.608 15:03:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:37.867 15:03:10 -- nvmf/common.sh@187 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:37.867 15:03:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:37.867 15:03:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:37.867 15:03:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:37.867 15:03:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:37.867 15:03:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:37.867 15:03:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:37.867 15:03:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:37.867 15:03:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:37.868 15:03:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:37.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:22:37.868 00:22:37.868 --- 10.0.0.2 ping statistics --- 00:22:37.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.868 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:22:37.868 15:03:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:37.868 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:37.868 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:22:37.868 00:22:37.868 --- 10.0.0.3 ping statistics --- 00:22:37.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.868 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:37.868 15:03:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:37.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:37.868 00:22:37.868 --- 10.0.0.1 ping statistics --- 00:22:37.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.868 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:37.868 15:03:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.868 15:03:10 -- nvmf/common.sh@421 -- # return 0 00:22:37.868 15:03:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:37.868 15:03:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.868 15:03:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:37.868 15:03:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:37.868 15:03:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.868 15:03:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:37.868 15:03:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:37.868 15:03:10 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:37.868 15:03:10 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:37.868 15:03:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:37.868 15:03:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:37.868 15:03:10 -- common/autotest_common.sh@10 -- # set +x 00:22:37.868 ************************************ 00:22:37.868 START TEST nvmf_digest_clean 00:22:37.868 ************************************ 00:22:37.868 15:03:10 -- common/autotest_common.sh@1114 -- # run_digest 00:22:37.868 15:03:10 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:37.868 15:03:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:37.868 15:03:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:37.868 15:03:10 -- common/autotest_common.sh@10 -- # set +x 
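Before the digest suite starts its target, nvmf_veth_init (nvmf/common.sh@140-206 above) rebuilds the virtual topology: a network namespace for the target, two veth pairs bridged together, a firewall exception for the NVMe/TCP port, and ping checks. Condensed from the trace (the second target interface nvmf_tgt_if2/10.0.0.3 is set up the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk                              # target namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge                            # bridge the two host-side peers
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target sanity check

With the topology up and nvme-tcp loaded, nvmfappstart launches nvmf_tgt inside the namespace (the '-i 0 -e 0xFFFF --wait-for-rpc' invocation just above).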
00:22:37.868 15:03:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:37.868 15:03:10 -- nvmf/common.sh@469 -- # nvmfpid=97373 00:22:37.868 15:03:10 -- nvmf/common.sh@470 -- # waitforlisten 97373 00:22:37.868 15:03:10 -- common/autotest_common.sh@829 -- # '[' -z 97373 ']' 00:22:37.868 15:03:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.868 15:03:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.868 15:03:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.868 15:03:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.868 15:03:10 -- common/autotest_common.sh@10 -- # set +x 00:22:37.868 [2024-12-01 15:03:10.895939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:37.868 [2024-12-01 15:03:10.896021] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.126 [2024-12-01 15:03:11.038890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.126 [2024-12-01 15:03:11.107872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:38.126 [2024-12-01 15:03:11.108057] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.126 [2024-12-01 15:03:11.108074] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.126 [2024-12-01 15:03:11.108085] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.126 [2024-12-01 15:03:11.108120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.126 15:03:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.126 15:03:11 -- common/autotest_common.sh@862 -- # return 0 00:22:38.126 15:03:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:38.126 15:03:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.126 15:03:11 -- common/autotest_common.sh@10 -- # set +x 00:22:38.126 15:03:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.126 15:03:11 -- host/digest.sh@120 -- # common_target_config 00:22:38.126 15:03:11 -- host/digest.sh@43 -- # rpc_cmd 00:22:38.126 15:03:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.126 15:03:11 -- common/autotest_common.sh@10 -- # set +x 00:22:38.384 null0 00:22:38.384 [2024-12-01 15:03:11.306516] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.384 [2024-12-01 15:03:11.330687] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.384 15:03:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.384 15:03:11 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:38.384 15:03:11 -- host/digest.sh@77 -- # local rw bs qd 00:22:38.384 15:03:11 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:38.384 15:03:11 -- host/digest.sh@80 -- # rw=randread 00:22:38.384 15:03:11 -- host/digest.sh@80 -- # bs=4096 00:22:38.384 15:03:11 -- host/digest.sh@80 -- # qd=128 00:22:38.384 15:03:11 -- host/digest.sh@82 -- # bperfpid=97404 00:22:38.384 15:03:11 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:38.384 15:03:11 -- host/digest.sh@83 -- # waitforlisten 97404 /var/tmp/bperf.sock 00:22:38.385 15:03:11 -- common/autotest_common.sh@829 -- # '[' -z 97404 ']' 00:22:38.385 15:03:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:38.385 15:03:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:38.385 15:03:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:38.385 15:03:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.385 15:03:11 -- common/autotest_common.sh@10 -- # set +x 00:22:38.385 [2024-12-01 15:03:11.393230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:38.385 [2024-12-01 15:03:11.393328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97404 ] 00:22:38.643 [2024-12-01 15:03:11.527374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.643 [2024-12-01 15:03:11.613981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.579 15:03:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.579 15:03:12 -- common/autotest_common.sh@862 -- # return 0 00:22:39.579 15:03:12 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:39.579 15:03:12 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:39.579 15:03:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:39.838 15:03:12 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:39.838 15:03:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.096 nvme0n1 00:22:40.096 15:03:13 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:40.096 15:03:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:40.096 Running I/O for 2 seconds... 00:22:41.999 00:22:41.999 Latency(us) 00:22:41.999 [2024-12-01T15:03:15.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.999 [2024-12-01T15:03:15.114Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:41.999 nvme0n1 : 2.00 22114.98 86.39 0.00 0.00 5783.46 2487.39 12749.73 00:22:41.999 [2024-12-01T15:03:15.114Z] =================================================================================================================== 00:22:41.999 [2024-12-01T15:03:15.114Z] Total : 22114.98 86.39 0.00 0.00 5783.46 2487.39 12749.73 00:22:41.999 0 00:22:42.257 15:03:15 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:42.257 15:03:15 -- host/digest.sh@92 -- # get_accel_stats 00:22:42.257 15:03:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:42.257 15:03:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:42.257 | select(.opcode=="crc32c") 00:22:42.257 | "\(.module_name) \(.executed)"' 00:22:42.257 15:03:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:42.514 15:03:15 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:42.514 15:03:15 -- host/digest.sh@93 -- # exp_module=software 00:22:42.514 15:03:15 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:42.514 15:03:15 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:42.514 15:03:15 -- host/digest.sh@97 -- # killprocess 97404 00:22:42.514 15:03:15 -- common/autotest_common.sh@936 -- # '[' -z 97404 ']' 00:22:42.514 15:03:15 -- common/autotest_common.sh@940 -- # kill -0 97404 00:22:42.514 15:03:15 -- common/autotest_common.sh@941 -- # uname 00:22:42.514 15:03:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:42.514 15:03:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97404 00:22:42.514 killing process with pid 97404 00:22:42.514 Received shutdown signal, test time was about 
2.000000 seconds 00:22:42.514 00:22:42.515 Latency(us) 00:22:42.515 [2024-12-01T15:03:15.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.515 [2024-12-01T15:03:15.630Z] =================================================================================================================== 00:22:42.515 [2024-12-01T15:03:15.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.515 15:03:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:42.515 15:03:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:42.515 15:03:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97404' 00:22:42.515 15:03:15 -- common/autotest_common.sh@955 -- # kill 97404 00:22:42.515 15:03:15 -- common/autotest_common.sh@960 -- # wait 97404 00:22:42.779 15:03:15 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:42.779 15:03:15 -- host/digest.sh@77 -- # local rw bs qd 00:22:42.779 15:03:15 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:42.779 15:03:15 -- host/digest.sh@80 -- # rw=randread 00:22:42.779 15:03:15 -- host/digest.sh@80 -- # bs=131072 00:22:42.779 15:03:15 -- host/digest.sh@80 -- # qd=16 00:22:42.779 15:03:15 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:42.779 15:03:15 -- host/digest.sh@82 -- # bperfpid=97500 00:22:42.779 15:03:15 -- host/digest.sh@83 -- # waitforlisten 97500 /var/tmp/bperf.sock 00:22:42.779 15:03:15 -- common/autotest_common.sh@829 -- # '[' -z 97500 ']' 00:22:42.779 15:03:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:42.780 15:03:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.780 15:03:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:42.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:42.780 15:03:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.780 15:03:15 -- common/autotest_common.sh@10 -- # set +x 00:22:42.780 [2024-12-01 15:03:15.710784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:42.780 [2024-12-01 15:03:15.711037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97500 ] 00:22:42.780 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:42.780 Zero copy mechanism will not be used. 
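Every workload in nvmf_digest_clean follows the same run_bperf recipe visible in the trace: start bdevperf paused, attach the target subsystem with data digest enabled, then drive a two-second workload over the bperf RPC socket. Condensed (paths are the ones in the log; the script wraps these calls in its bperf_rpc/bperf_py helpers):

    # Start bdevperf on core 1, paused until the framework is initialized:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # Finish init, then attach the subsystem over TCP with data digest (--ddgst):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the timed workload against the resulting nvme0n1 bdev:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Only the workload parameters change between runs: randread or randwrite, 4096-byte or 131072-byte I/O, and queue depth 128 or 16.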
00:22:42.780 [2024-12-01 15:03:15.844439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.038 [2024-12-01 15:03:15.926232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.038 15:03:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.038 15:03:15 -- common/autotest_common.sh@862 -- # return 0 00:22:43.038 15:03:15 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:43.038 15:03:15 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:43.038 15:03:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:43.297 15:03:16 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.297 15:03:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.555 nvme0n1 00:22:43.555 15:03:16 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:43.555 15:03:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:43.814 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:43.814 Zero copy mechanism will not be used. 00:22:43.814 Running I/O for 2 seconds... 00:22:45.716 00:22:45.716 Latency(us) 00:22:45.716 [2024-12-01T15:03:18.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.716 [2024-12-01T15:03:18.831Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:45.716 nvme0n1 : 2.00 7975.72 996.97 0.00 0.00 2003.35 525.03 4140.68 00:22:45.716 [2024-12-01T15:03:18.831Z] =================================================================================================================== 00:22:45.716 [2024-12-01T15:03:18.831Z] Total : 7975.72 996.97 0.00 0.00 2003.35 525.03 4140.68 00:22:45.716 0 00:22:45.716 15:03:18 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:45.716 15:03:18 -- host/digest.sh@92 -- # get_accel_stats 00:22:45.716 15:03:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:45.716 15:03:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:45.716 15:03:18 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:45.716 | select(.opcode=="crc32c") 00:22:45.716 | "\(.module_name) \(.executed)"' 00:22:45.975 15:03:19 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:45.975 15:03:19 -- host/digest.sh@93 -- # exp_module=software 00:22:45.975 15:03:19 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:45.975 15:03:19 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:45.975 15:03:19 -- host/digest.sh@97 -- # killprocess 97500 00:22:45.975 15:03:19 -- common/autotest_common.sh@936 -- # '[' -z 97500 ']' 00:22:45.975 15:03:19 -- common/autotest_common.sh@940 -- # kill -0 97500 00:22:45.975 15:03:19 -- common/autotest_common.sh@941 -- # uname 00:22:45.975 15:03:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:45.975 15:03:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97500 00:22:45.975 15:03:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:45.975 killing process with pid 97500 00:22:45.975 Received shutdown signal, test time was about 2.000000 seconds 00:22:45.975 00:22:45.975 Latency(us) 00:22:45.975 
[2024-12-01T15:03:19.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.975 [2024-12-01T15:03:19.090Z] =================================================================================================================== 00:22:45.975 [2024-12-01T15:03:19.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.975 15:03:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:45.975 15:03:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97500' 00:22:45.975 15:03:19 -- common/autotest_common.sh@955 -- # kill 97500 00:22:45.975 15:03:19 -- common/autotest_common.sh@960 -- # wait 97500 00:22:46.234 15:03:19 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:46.234 15:03:19 -- host/digest.sh@77 -- # local rw bs qd 00:22:46.234 15:03:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:46.234 15:03:19 -- host/digest.sh@80 -- # rw=randwrite 00:22:46.234 15:03:19 -- host/digest.sh@80 -- # bs=4096 00:22:46.234 15:03:19 -- host/digest.sh@80 -- # qd=128 00:22:46.234 15:03:19 -- host/digest.sh@82 -- # bperfpid=97571 00:22:46.234 15:03:19 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:46.234 15:03:19 -- host/digest.sh@83 -- # waitforlisten 97571 /var/tmp/bperf.sock 00:22:46.234 15:03:19 -- common/autotest_common.sh@829 -- # '[' -z 97571 ']' 00:22:46.234 15:03:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:46.234 15:03:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:46.234 15:03:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:46.234 15:03:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.234 15:03:19 -- common/autotest_common.sh@10 -- # set +x 00:22:46.493 [2024-12-01 15:03:19.359991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:46.493 [2024-12-01 15:03:19.360056] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97571 ] 00:22:46.493 [2024-12-01 15:03:19.490769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.493 [2024-12-01 15:03:19.578841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.429 15:03:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.429 15:03:20 -- common/autotest_common.sh@862 -- # return 0 00:22:47.429 15:03:20 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:47.429 15:03:20 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:47.429 15:03:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:47.688 15:03:20 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:47.688 15:03:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:47.946 nvme0n1 00:22:47.946 15:03:20 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:47.946 15:03:20 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:47.946 Running I/O for 2 seconds... 00:22:50.480 00:22:50.480 Latency(us) 00:22:50.480 [2024-12-01T15:03:23.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.480 [2024-12-01T15:03:23.595Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:50.480 nvme0n1 : 2.01 28425.76 111.04 0.00 0.00 4498.77 1906.50 8043.05 00:22:50.480 [2024-12-01T15:03:23.595Z] =================================================================================================================== 00:22:50.480 [2024-12-01T15:03:23.595Z] Total : 28425.76 111.04 0.00 0.00 4498.77 1906.50 8043.05 00:22:50.480 0 00:22:50.480 15:03:23 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:50.480 15:03:23 -- host/digest.sh@92 -- # get_accel_stats 00:22:50.480 15:03:23 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:50.480 15:03:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:50.480 15:03:23 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:50.480 | select(.opcode=="crc32c") 00:22:50.480 | "\(.module_name) \(.executed)"' 00:22:50.480 15:03:23 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:50.480 15:03:23 -- host/digest.sh@93 -- # exp_module=software 00:22:50.480 15:03:23 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:50.480 15:03:23 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:50.480 15:03:23 -- host/digest.sh@97 -- # killprocess 97571 00:22:50.480 15:03:23 -- common/autotest_common.sh@936 -- # '[' -z 97571 ']' 00:22:50.480 15:03:23 -- common/autotest_common.sh@940 -- # kill -0 97571 00:22:50.480 15:03:23 -- common/autotest_common.sh@941 -- # uname 00:22:50.480 15:03:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:50.480 15:03:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97571 00:22:50.480 15:03:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:50.480 killing 
process with pid 97571 00:22:50.480 15:03:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:50.480 15:03:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97571' 00:22:50.480 Received shutdown signal, test time was about 2.000000 seconds 00:22:50.480 00:22:50.480 Latency(us) 00:22:50.480 [2024-12-01T15:03:23.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.480 [2024-12-01T15:03:23.595Z] =================================================================================================================== 00:22:50.480 [2024-12-01T15:03:23.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.480 15:03:23 -- common/autotest_common.sh@955 -- # kill 97571 00:22:50.480 15:03:23 -- common/autotest_common.sh@960 -- # wait 97571 00:22:50.480 15:03:23 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:50.480 15:03:23 -- host/digest.sh@77 -- # local rw bs qd 00:22:50.480 15:03:23 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:50.480 15:03:23 -- host/digest.sh@80 -- # rw=randwrite 00:22:50.480 15:03:23 -- host/digest.sh@80 -- # bs=131072 00:22:50.480 15:03:23 -- host/digest.sh@80 -- # qd=16 00:22:50.480 15:03:23 -- host/digest.sh@82 -- # bperfpid=97661 00:22:50.480 15:03:23 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:50.480 15:03:23 -- host/digest.sh@83 -- # waitforlisten 97661 /var/tmp/bperf.sock 00:22:50.480 15:03:23 -- common/autotest_common.sh@829 -- # '[' -z 97661 ']' 00:22:50.480 15:03:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:50.480 15:03:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.480 15:03:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:50.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:50.480 15:03:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.480 15:03:23 -- common/autotest_common.sh@10 -- # set +x 00:22:50.739 [2024-12-01 15:03:23.633624] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:50.739 [2024-12-01 15:03:23.633935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97661 ] 00:22:50.739 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:50.739 Zero copy mechanism will not be used. 
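After each run the test reads the accel framework's statistics and checks which module executed the crc32c operations; with no offload engine configured, the expected module is software:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # digest.sh@92-95 reads this as "<module> <count>", requires count > 0 and
    # module == software, then kills the bperf process and moves on to the next workload.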
00:22:50.739 [2024-12-01 15:03:23.774499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.739 [2024-12-01 15:03:23.846778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.688 15:03:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.688 15:03:24 -- common/autotest_common.sh@862 -- # return 0 00:22:51.688 15:03:24 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:51.688 15:03:24 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:51.688 15:03:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:51.967 15:03:24 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:51.967 15:03:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:52.241 nvme0n1 00:22:52.241 15:03:25 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:52.241 15:03:25 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:52.241 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:52.241 Zero copy mechanism will not be used. 00:22:52.241 Running I/O for 2 seconds... 00:22:54.774 00:22:54.774 Latency(us) 00:22:54.774 [2024-12-01T15:03:27.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.774 [2024-12-01T15:03:27.889Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:54.774 nvme0n1 : 2.00 8209.93 1026.24 0.00 0.00 1943.83 1586.27 7119.59 00:22:54.774 [2024-12-01T15:03:27.889Z] =================================================================================================================== 00:22:54.774 [2024-12-01T15:03:27.889Z] Total : 8209.93 1026.24 0.00 0.00 1943.83 1586.27 7119.59 00:22:54.774 0 00:22:54.774 15:03:27 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:54.774 15:03:27 -- host/digest.sh@92 -- # get_accel_stats 00:22:54.774 15:03:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:54.774 15:03:27 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:54.774 | select(.opcode=="crc32c") 00:22:54.774 | "\(.module_name) \(.executed)"' 00:22:54.774 15:03:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:54.774 15:03:27 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:54.774 15:03:27 -- host/digest.sh@93 -- # exp_module=software 00:22:54.774 15:03:27 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:54.774 15:03:27 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:54.774 15:03:27 -- host/digest.sh@97 -- # killprocess 97661 00:22:54.774 15:03:27 -- common/autotest_common.sh@936 -- # '[' -z 97661 ']' 00:22:54.774 15:03:27 -- common/autotest_common.sh@940 -- # kill -0 97661 00:22:54.774 15:03:27 -- common/autotest_common.sh@941 -- # uname 00:22:54.774 15:03:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.774 15:03:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97661 00:22:54.774 15:03:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:54.774 15:03:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:54.774 killing process with pid 97661 00:22:54.774 15:03:27 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 97661' 00:22:54.774 15:03:27 -- common/autotest_common.sh@955 -- # kill 97661 00:22:54.774 Received shutdown signal, test time was about 2.000000 seconds 00:22:54.774 00:22:54.774 Latency(us) 00:22:54.774 [2024-12-01T15:03:27.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.774 [2024-12-01T15:03:27.889Z] =================================================================================================================== 00:22:54.774 [2024-12-01T15:03:27.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.774 15:03:27 -- common/autotest_common.sh@960 -- # wait 97661 00:22:54.774 15:03:27 -- host/digest.sh@126 -- # killprocess 97373 00:22:54.774 15:03:27 -- common/autotest_common.sh@936 -- # '[' -z 97373 ']' 00:22:54.774 15:03:27 -- common/autotest_common.sh@940 -- # kill -0 97373 00:22:54.774 15:03:27 -- common/autotest_common.sh@941 -- # uname 00:22:54.774 15:03:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.774 15:03:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97373 00:22:55.033 15:03:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:55.033 15:03:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:55.033 killing process with pid 97373 00:22:55.033 15:03:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97373' 00:22:55.033 15:03:27 -- common/autotest_common.sh@955 -- # kill 97373 00:22:55.033 15:03:27 -- common/autotest_common.sh@960 -- # wait 97373 00:22:55.033 00:22:55.033 real 0m17.238s 00:22:55.033 user 0m31.534s 00:22:55.033 sys 0m5.655s 00:22:55.033 15:03:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:55.033 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.033 ************************************ 00:22:55.033 END TEST nvmf_digest_clean 00:22:55.033 ************************************ 00:22:55.033 15:03:28 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:55.033 15:03:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:55.033 15:03:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:55.033 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.033 ************************************ 00:22:55.033 START TEST nvmf_digest_error 00:22:55.033 ************************************ 00:22:55.033 15:03:28 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:55.033 15:03:28 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:55.033 15:03:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:55.033 15:03:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.033 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.033 15:03:28 -- nvmf/common.sh@469 -- # nvmfpid=97776 00:22:55.033 15:03:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:55.033 15:03:28 -- nvmf/common.sh@470 -- # waitforlisten 97776 00:22:55.033 15:03:28 -- common/autotest_common.sh@829 -- # '[' -z 97776 ']' 00:22:55.033 15:03:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.033 15:03:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:55.033 15:03:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.033 15:03:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.033 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.293 [2024-12-01 15:03:28.177398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:55.293 [2024-12-01 15:03:28.177494] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.293 [2024-12-01 15:03:28.308668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.293 [2024-12-01 15:03:28.357223] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:55.293 [2024-12-01 15:03:28.357355] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.293 [2024-12-01 15:03:28.357366] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.293 [2024-12-01 15:03:28.357375] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.293 [2024-12-01 15:03:28.357404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.551 15:03:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.551 15:03:28 -- common/autotest_common.sh@862 -- # return 0 00:22:55.551 15:03:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:55.551 15:03:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.551 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.551 15:03:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.551 15:03:28 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:55.551 15:03:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.551 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.551 [2024-12-01 15:03:28.477856] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:55.552 15:03:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.552 15:03:28 -- host/digest.sh@104 -- # common_target_config 00:22:55.552 15:03:28 -- host/digest.sh@43 -- # rpc_cmd 00:22:55.552 15:03:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.552 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.552 null0 00:22:55.552 [2024-12-01 15:03:28.597315] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.552 [2024-12-01 15:03:28.621523] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.552 15:03:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.552 15:03:28 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:55.552 15:03:28 -- host/digest.sh@54 -- # local rw bs qd 00:22:55.552 15:03:28 -- host/digest.sh@56 -- # rw=randread 00:22:55.552 15:03:28 -- host/digest.sh@56 -- # bs=4096 00:22:55.552 15:03:28 -- host/digest.sh@56 -- # qd=128 00:22:55.552 15:03:28 -- host/digest.sh@58 -- # bperfpid=97806 00:22:55.552 15:03:28 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:55.552 15:03:28 -- 
host/digest.sh@60 -- # waitforlisten 97806 /var/tmp/bperf.sock 00:22:55.552 15:03:28 -- common/autotest_common.sh@829 -- # '[' -z 97806 ']' 00:22:55.552 15:03:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:55.552 15:03:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.552 15:03:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:55.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:55.552 15:03:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.552 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.810 [2024-12-01 15:03:28.683866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:55.810 [2024-12-01 15:03:28.683973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97806 ] 00:22:55.810 [2024-12-01 15:03:28.822159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.810 [2024-12-01 15:03:28.903826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.836 15:03:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.836 15:03:29 -- common/autotest_common.sh@862 -- # return 0 00:22:56.836 15:03:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.836 15:03:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.836 15:03:29 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:56.836 15:03:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.836 15:03:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.836 15:03:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.836 15:03:29 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.836 15:03:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.095 nvme0n1 00:22:57.095 15:03:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:57.095 15:03:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.095 15:03:30 -- common/autotest_common.sh@10 -- # set +x 00:22:57.095 15:03:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.095 15:03:30 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:57.095 15:03:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:57.354 Running I/O for 2 seconds... 
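The setup just traced is the heart of the digest-error case: the bperf side attaches the controller over TCP with data digest enabled (--ddgst), while the target side, whose crc32c operations were assigned to the error accel module earlier in the log, first has injection disabled so the connect succeeds and then has it set to corrupt digests before I/O starts. Every read that follows therefore completes on the initiator with a transport-level data digest error, which is the wall of nvme_tcp/nvme_qpair messages below. A condensed sketch of that RPC sequence, assuming the same sockets and addresses as in the trace and that the target answers on rpc.py's default /var/tmp/spdk.sock, as rpc_cmd does in this harness:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
# bperf side: keep per-NVMe error statistics and retry indefinitely at the bdev layer
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# target side: leave crc32c untouched while the initiator connects
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
# bperf side: attach over TCP with the data digest (ddgst) enabled
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# target side: corrupt crc32c results for the next 256 operations
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
# drive I/O; each read should fail the digest check on the initiator
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests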
00:22:57.354 [2024-12-01 15:03:30.263053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.263105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.263117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.275298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.275329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.275340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.288800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.288829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.288840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.300583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.300613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.300625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.311148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.311177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.311189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.323800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.323828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.323839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.335624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.335654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.335665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.345811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.345839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.345850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.353952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.353982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.353992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.364984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.365012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.365024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.376227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.376257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.376268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.384743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.384782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.384794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.396810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.396851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.396862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.406441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.406470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.406482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.418654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.418684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.418695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.431085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.431116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.431127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.442631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.442661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.442671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.454201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.454230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.454242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.354 [2024-12-01 15:03:30.462815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.354 [2024-12-01 15:03:30.462843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.354 [2024-12-01 15:03:30.462854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.474922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.474952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.474962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.487663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.487694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 
[2024-12-01 15:03:30.487705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.499503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.499532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.499543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.512514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.512543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.512554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.521366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.521395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.521406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.531678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.531708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.531719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.541822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.541852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.541862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.551263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.551293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.551303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.561075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.561104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9704 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.561115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.571562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.571592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.571602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.580073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.580112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.580123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.591540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.591570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.591580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.602741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.602778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.602789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.614588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.614617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.614628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.626832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.626861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.626872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.636139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.636169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.636181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.645748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.645787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.645797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.655552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.655581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.655592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.664923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.664952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.664963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.674327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.674356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.613 [2024-12-01 15:03:30.674366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.613 [2024-12-01 15:03:30.685325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.613 [2024-12-01 15:03:30.685354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.614 [2024-12-01 15:03:30.685365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.614 [2024-12-01 15:03:30.696976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.614 [2024-12-01 15:03:30.697005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.614 [2024-12-01 15:03:30.697016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.614 [2024-12-01 15:03:30.705723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.614 [2024-12-01 15:03:30.705805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.614 [2024-12-01 15:03:30.705817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.614 [2024-12-01 15:03:30.716236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.614 [2024-12-01 15:03:30.716265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.614 [2024-12-01 15:03:30.716276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.728083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.728126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.728138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.737918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.737950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.737961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.748272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.748302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.748314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.758746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.758785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.758796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.768557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.768586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.768597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.780112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.780141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.780152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.788376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.788405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.788416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.799147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.799186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.799202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.813487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.813517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.813530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.826657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.826686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.826697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.838114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.838144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.838167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.848341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.848382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.848394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.858187] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.858216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.858227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.870616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.870646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.870657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.881572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.881601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.881612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.891187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.891217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.891227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.904046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.904076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.904088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.917265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.917295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.917308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.928251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.928280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.928291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:57.873 [2024-12-01 15:03:30.938473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.938503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.938513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.948785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.948824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.948835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.960211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.960240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.960251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.969146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.969175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.969186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.873 [2024-12-01 15:03:30.978932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:57.873 [2024-12-01 15:03:30.978961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.873 [2024-12-01 15:03:30.978971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:30.990384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:30.990418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:30.990430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.000053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.000084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.000095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.009408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.009473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.009485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.018593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.018622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.018633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.027985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.028027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.028037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.037400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.037434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.037445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.046784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.046811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.046822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.056138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.056169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.056180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.065353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.065382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.065393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.074665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.074695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.074705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.083982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.084023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.084034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.132 [2024-12-01 15:03:31.093331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.132 [2024-12-01 15:03:31.093360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.132 [2024-12-01 15:03:31.093371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.102943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.102972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.102983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.112022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.112051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.112062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.122896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.122925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.122935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.133082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.133111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.133 [2024-12-01 15:03:31.133122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.142214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.142242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.142253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.153524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.153554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.153564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.163007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.163035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.163046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.172469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.172498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.172509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.181722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.181763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.181776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.193660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.193689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.193700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.205007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.205036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:14625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.205047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.215446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.215475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.215486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.226722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.226761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.226773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.133 [2024-12-01 15:03:31.236300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.133 [2024-12-01 15:03:31.236331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.133 [2024-12-01 15:03:31.236342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.391 [2024-12-01 15:03:31.247017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.391 [2024-12-01 15:03:31.247048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.391 [2024-12-01 15:03:31.247059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.391 [2024-12-01 15:03:31.257824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.391 [2024-12-01 15:03:31.257854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.391 [2024-12-01 15:03:31.257864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.391 [2024-12-01 15:03:31.267347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.391 [2024-12-01 15:03:31.267378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.391 [2024-12-01 15:03:31.267390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.391 [2024-12-01 15:03:31.277535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.391 [2024-12-01 15:03:31.277564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.391 [2024-12-01 15:03:31.277575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.391 [2024-12-01 15:03:31.288080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.391 [2024-12-01 15:03:31.288110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.391 [2024-12-01 15:03:31.288121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.391 [2024-12-01 15:03:31.296466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.391 [2024-12-01 15:03:31.296497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.391 [2024-12-01 15:03:31.296508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.391 [2024-12-01 15:03:31.308240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.391 [2024-12-01 15:03:31.308269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.391 [2024-12-01 15:03:31.308280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.391 [2024-12-01 15:03:31.317974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.391 [2024-12-01 15:03:31.318003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.391 [2024-12-01 15:03:31.318014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.391 [2024-12-01 15:03:31.326482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.391 [2024-12-01 15:03:31.326510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.391 [2024-12-01 15:03:31.326521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.391 [2024-12-01 15:03:31.335679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.391 [2024-12-01 15:03:31.335708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.335720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.346570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 
[2024-12-01 15:03:31.346599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.346610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.357976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.358006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.358017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.368588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.368617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.368628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.380886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.380915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.380925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.390393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.390422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.390433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.399824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.399864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.399874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.410630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.410659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.410670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.421084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.421137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.421152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.435321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.435352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.435364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.447275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.447305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.447316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.456127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.456157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.456167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.465292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.465322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.465333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.475762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.475802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.475813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.485134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.485163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.485174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.392 [2024-12-01 15:03:31.495054] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.392 [2024-12-01 15:03:31.495083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.392 [2024-12-01 15:03:31.495093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.505186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.505217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.505229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.517312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.517342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.517353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.529004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.529033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.529045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.541303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.541333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.541344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.552140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.552169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.552180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.561184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.561213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.561240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.571502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.571533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.571544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.580626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.580655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.580666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.591824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.591852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.591863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.603004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.603034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.603045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.612572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.612602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.612613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.624115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.624144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.624157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.650 [2024-12-01 15:03:31.634948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.650 [2024-12-01 15:03:31.634977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.650 [2024-12-01 15:03:31.634988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.644047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.644077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.644088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.654197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.654226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.654237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.663872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.663912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.663924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.673931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.673981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.673993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.683504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.683534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.683545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.693036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.693065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.693075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.701567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.701596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.701607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.711875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.711915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.711926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.723261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.723291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.723302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.732594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.732623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.732634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.742086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.742114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.742125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.651 [2024-12-01 15:03:31.754844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.651 [2024-12-01 15:03:31.754872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.651 [2024-12-01 15:03:31.754883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.909 [2024-12-01 15:03:31.766431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.909 [2024-12-01 15:03:31.766464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.909 [2024-12-01 15:03:31.766475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.909 [2024-12-01 15:03:31.778391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.909 [2024-12-01 15:03:31.778421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.909 [2024-12-01 15:03:31.778433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.787322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.787351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.787362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.799339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.799370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.799381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.808688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.808718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.808730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.818785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.818814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.818824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.829951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.829981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.829992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.841430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.841470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.841481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.851557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.851586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:17422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.851597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.862226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.862255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.862266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.871066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.871095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.871106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.881306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.881336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.881346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.890675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.890705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.890716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.899914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.899942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.899953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.912100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.912129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.912140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.924762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.924801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.924812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.936502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.936531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.936542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.948255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.948296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.948307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.957943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.957973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.957983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.969311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.969352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.969362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.982919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.982961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.982972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:31.994809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:31.994837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:31.994847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:32.004527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 
00:22:58.910 [2024-12-01 15:03:32.004556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:32.004568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.910 [2024-12-01 15:03:32.015037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:58.910 [2024-12-01 15:03:32.015066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.910 [2024-12-01 15:03:32.015077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.026720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.026777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.026790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.036968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.036999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.037010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.046931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.046960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.046970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.057387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.057435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.057463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.068871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.068900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.068912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.077066] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.077095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.077106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.089111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.089153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.089164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.100244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.100274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.100284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.110292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.110323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.110334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.120721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.120760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.120773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.131523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.131552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.131564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.141061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.141089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.141100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.150170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.150199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.150210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.160319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.160349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.160359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.169220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.169249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.169260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.179020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.179050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.179060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.189133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.189162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.189173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.198221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.198250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.198260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.169 [2024-12-01 15:03:32.207761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0) 00:22:59.169 [2024-12-01 15:03:32.207789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.169 [2024-12-01 15:03:32.207800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.169 [2024-12-01 15:03:32.219028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0)
00:22:59.169 [2024-12-01 15:03:32.219069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.169 [2024-12-01 15:03:32.219079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.169 [2024-12-01 15:03:32.227434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0)
00:22:59.169 [2024-12-01 15:03:32.227463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.169 [2024-12-01 15:03:32.227474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.169 [2024-12-01 15:03:32.238912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0)
00:22:59.170 [2024-12-01 15:03:32.238953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.170 [2024-12-01 15:03:32.238965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.170 [2024-12-01 15:03:32.250669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25318d0)
00:22:59.170 [2024-12-01 15:03:32.250698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.170 [2024-12-01 15:03:32.250709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:59.170
00:22:59.170 Latency(us)
00:22:59.170 [2024-12-01T15:03:32.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.170 [2024-12-01T15:03:32.285Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:22:59.170 nvme0n1 : 2.00 24214.27 94.59 0.00 0.00 5281.53 2323.55 17396.83
00:22:59.170 [2024-12-01T15:03:32.285Z] ===================================================================================================================
00:22:59.170 [2024-12-01T15:03:32.285Z] Total : 24214.27 94.59 0.00 0.00 5281.53 2323.55 17396.83
00:22:59.170 0
00:22:59.170 15:03:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:59.170 15:03:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:59.170 15:03:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:59.170 | .driver_specific
00:22:59.170 | .nvme_error
00:22:59.170 | .status_code
00:22:59.170 | .command_transient_transport_error'
00:22:59.170 15:03:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:59.427 15:03:32 -- host/digest.sh@71 -- # (( 190 > 0 ))
00:22:59.427 15:03:32 -- host/digest.sh@73 -- # killprocess 97806
00:22:59.427 15:03:32 -- common/autotest_common.sh@936 -- # '[' -z 97806 ']'
00:22:59.427 15:03:32 -- common/autotest_common.sh@940 -- # kill -0 97806
00:22:59.427 15:03:32 -- common/autotest_common.sh@941 -- # uname
00:22:59.427 15:03:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:59.427 15:03:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97806
00:22:59.686 15:03:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:59.686 15:03:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:59.686 killing process with pid 97806
00:22:59.686 15:03:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97806'
00:22:59.686 15:03:32 -- common/autotest_common.sh@955 -- # kill 97806
00:22:59.686 Received shutdown signal, test time was about 2.000000 seconds
00:22:59.686
00:22:59.686 Latency(us)
00:22:59.686 [2024-12-01T15:03:32.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.686 [2024-12-01T15:03:32.801Z] ===================================================================================================================
00:22:59.686 [2024-12-01T15:03:32.801Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:59.686 15:03:32 -- common/autotest_common.sh@960 -- # wait 97806
00:22:59.686 15:03:32 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:22:59.686 15:03:32 -- host/digest.sh@54 -- # local rw bs qd
00:22:59.686 15:03:32 -- host/digest.sh@56 -- # rw=randread
00:22:59.686 15:03:32 -- host/digest.sh@56 -- # bs=131072
00:22:59.686 15:03:32 -- host/digest.sh@56 -- # qd=16
00:22:59.686 15:03:32 -- host/digest.sh@58 -- # bperfpid=97892
00:22:59.686 15:03:32 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:22:59.686 15:03:32 -- host/digest.sh@60 -- # waitforlisten 97892 /var/tmp/bperf.sock
00:22:59.686 15:03:32 -- common/autotest_common.sh@829 -- # '[' -z 97892 ']'
00:22:59.686 15:03:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:59.686 15:03:32 -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:59.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:59.686 15:03:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:59.686 15:03:32 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:59.686 15:03:32 -- common/autotest_common.sh@10 -- # set +x
00:22:59.686 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:59.686 Zero copy mechanism will not be used.
00:22:59.686 [2024-12-01 15:03:32.779847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
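The shell trace above is the pass/fail check for the first run: get_transient_errcount reads the per-bdev NVMe error counters from the bdevperf instance over its RPC socket, and host/digest.sh@71 asserts that the number of commands finished with a transient transport error is non-zero (190 in this run). Below is a minimal sketch of that query in bash, using only the rpc.py path, socket name, and jq filter that appear in the xtrace output; the standalone variable names are mine, not the literal digest.sh code.

# Sketch: count READs on nvme0n1 that completed with COMMAND TRANSIENT TRANSPORT ERROR.
# Paths and jq filter are copied from the trace above; requires bdev_nvme_set_options --nvme-error-stat.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # this run reported 190 such completions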
00:22:59.686 [2024-12-01 15:03:32.779933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97892 ]
00:22:59.944 [2024-12-01 15:03:32.907969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:59.944 [2024-12-01 15:03:32.961869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:00.877 15:03:33 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:00.877 15:03:33 -- common/autotest_common.sh@862 -- # return 0
00:23:00.877 15:03:33 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:00.877 15:03:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:00.877 15:03:33 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:00.877 15:03:33 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.877 15:03:33 -- common/autotest_common.sh@10 -- # set +x
00:23:00.877 15:03:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.877 15:03:33 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:00.878 15:03:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:01.135 nvme0n1
00:23:01.135 15:03:34 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:01.135 15:03:34 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:01.135 15:03:34 -- common/autotest_common.sh@10 -- # set +x
00:23:01.135 15:03:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:01.135 15:03:34 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:01.135 15:03:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:01.395 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:01.395 Zero copy mechanism will not be used.
00:23:01.395 Running I/O for 2 seconds...
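Before the timed run that starts below, the trace above sets this second pass up to fail data digest verification on purpose: the bdevperf initiator enables per-command NVMe error statistics and unlimited retries, attaches the controller with --ddgst so the digest of every received data PDU is checked, and the crc32c accel operation is told to produce corrupted results for 32 operations, which then surface as the COMMAND TRANSIENT TRANSPORT ERROR completions that follow. A condensed sketch of that sequence, using only commands visible in the trace; the rpc variable and the assumption that rpc_cmd talks to the suite's default RPC socket (not the bperf socket) are mine.

# Initiator (bdevperf) side: keep NVMe error stats and retry failed commands indefinitely.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the remote namespace over TCP with data digest enabled (--ddgst); this creates nvme0n1.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt crc32c accel results for the next 32 operations (sent via the default RPC socket, as rpc_cmd does),
# so the receive-path digest check fails and the affected READs complete as transient transport errors.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
# Start the 2-second randread workload through the bdevperf RPC helper.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests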
00:23:01.395 [2024-12-01 15:03:34.297239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.297291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.297304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.302366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.302397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.302409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.305636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.305668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.305680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.309963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.309993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.310004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.313853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.313879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.313890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.317868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.317909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.317920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.321673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.321701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.321713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.325914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.325941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.325954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.329988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.330017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.330028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.333853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.333902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.333913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.337849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.337877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.337890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.341416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.341454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.341476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.345111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.345154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.345165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.348930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.348960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.348970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.352843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.352873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.352884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.356787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.356826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.356838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.360666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.360707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.360717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.365132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.365159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.365171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.368847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.368886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.368896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.372522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.395 [2024-12-01 15:03:34.372553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.395 [2024-12-01 15:03:34.372563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.395 [2024-12-01 15:03:34.376305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.376347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 
[2024-12-01 15:03:34.376358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.380502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.380531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.380541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.383733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.383786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.383797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.386948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.386976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.386987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.391016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.391044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.391056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.395381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.395409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.395422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.399706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.399734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.399747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.403220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.403248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.403260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.406374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.406402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.406414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.410842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.410870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.410879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.414870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.414898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.414910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.418914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.418941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.418953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.422837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.422864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.422875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.427206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.427234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.427247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.430828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.430855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.430868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.434819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.434859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.434870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.439016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.439044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.439054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.443009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.443037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.443049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.446794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.446822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.446834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.450444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.450485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.450496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.454618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.454647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.454658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.458943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.458984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.458995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.462972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.463000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.463011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.466818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.466846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.466856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.470222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.470250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.470261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.473893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.473922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.396 [2024-12-01 15:03:34.473933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.396 [2024-12-01 15:03:34.477674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.396 [2024-12-01 15:03:34.477703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.397 [2024-12-01 15:03:34.477715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.397 [2024-12-01 15:03:34.481315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.397 [2024-12-01 15:03:34.481343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.397 [2024-12-01 15:03:34.481354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.397 [2024-12-01 15:03:34.485843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.397 
[2024-12-01 15:03:34.485870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.397 [2024-12-01 15:03:34.485881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.397 [2024-12-01 15:03:34.489600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.397 [2024-12-01 15:03:34.489630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.397 [2024-12-01 15:03:34.489641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.397 [2024-12-01 15:03:34.494158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.397 [2024-12-01 15:03:34.494187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.397 [2024-12-01 15:03:34.494197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.397 [2024-12-01 15:03:34.498662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.397 [2024-12-01 15:03:34.498692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.397 [2024-12-01 15:03:34.498703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.397 [2024-12-01 15:03:34.502426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.397 [2024-12-01 15:03:34.502455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.397 [2024-12-01 15:03:34.502465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.397 [2024-12-01 15:03:34.506284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.397 [2024-12-01 15:03:34.506334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.397 [2024-12-01 15:03:34.506346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.657 [2024-12-01 15:03:34.510115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.657 [2024-12-01 15:03:34.510147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.657 [2024-12-01 15:03:34.510158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.657 [2024-12-01 15:03:34.514263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x124dd10) 00:23:01.657 [2024-12-01 15:03:34.514294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.657 [2024-12-01 15:03:34.514305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.657 [2024-12-01 15:03:34.518108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.657 [2024-12-01 15:03:34.518139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.657 [2024-12-01 15:03:34.518149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.657 [2024-12-01 15:03:34.521717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.657 [2024-12-01 15:03:34.521789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.657 [2024-12-01 15:03:34.521802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.657 [2024-12-01 15:03:34.525001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.657 [2024-12-01 15:03:34.525043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.657 [2024-12-01 15:03:34.525054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.657 [2024-12-01 15:03:34.528393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.657 [2024-12-01 15:03:34.528435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.657 [2024-12-01 15:03:34.528446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.657 [2024-12-01 15:03:34.532321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.657 [2024-12-01 15:03:34.532350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.532361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.535864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.535893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.535904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.539314] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.539343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.539354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.543090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.543118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.543129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.546338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.546367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.546377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.549510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.549539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.549549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.552706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.552735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.552746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.556628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.556658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.556668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.560015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.560044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.560055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:01.658 [2024-12-01 15:03:34.563635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.563664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.563675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.567545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.567573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.567584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.571887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.571914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.571934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.575591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.575619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.575630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.579308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.579336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.579346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.582671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.582701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.582712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.586331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.586361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.586371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.589748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.589817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.589828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.593160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.593188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.593200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.597621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.597665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.597677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.601348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.601377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.658 [2024-12-01 15:03:34.601387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.658 [2024-12-01 15:03:34.605872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.658 [2024-12-01 15:03:34.605900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.605911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.610165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.610192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.610203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.613965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.613994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.614004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.617713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.617760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.617799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.621279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.621308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.621318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.624943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.624984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.624995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.628721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.628773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.628785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.632065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.632094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.632105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.635988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.636017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.636027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.638974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.639003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.639013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.643128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.643157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.643168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.646007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.646035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.646045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.649362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.649391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.649401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.653194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.653224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.653234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.656581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.656609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.656620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.660897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.660925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.660936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.664272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.664302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 
[2024-12-01 15:03:34.664312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.667762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.667789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.667800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.671127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.671156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.671166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.674362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.674391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.674402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.678244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.678273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.659 [2024-12-01 15:03:34.678284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.659 [2024-12-01 15:03:34.682060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.659 [2024-12-01 15:03:34.682089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.682099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.685552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.685594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.685605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.689335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.689363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.689374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.692847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.692887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.692899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.696746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.696783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.696797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.700498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.700539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.700550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.704412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.704439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.704450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.708208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.708235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.708246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.712119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.712146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.712156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.715280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.715307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.715317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.718631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.718660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.718671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.722174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.722202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.722213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.726061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.726090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.726100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.729687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.729728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.729738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.733035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.733063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.733074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.736873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.736902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.736912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.740197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.740227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.740238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.743998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.744039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.744050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.747296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.747325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.747336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.750823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.750864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.750875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.755036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.755065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.755075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.758344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.660 [2024-12-01 15:03:34.758374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.660 [2024-12-01 15:03:34.758385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.660 [2024-12-01 15:03:34.761957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.661 [2024-12-01 15:03:34.761985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.661 [2024-12-01 15:03:34.761996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.661 [2024-12-01 15:03:34.765592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.661 
[2024-12-01 15:03:34.765651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.661 [2024-12-01 15:03:34.765663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.661 [2024-12-01 15:03:34.768668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.661 [2024-12-01 15:03:34.768699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.661 [2024-12-01 15:03:34.768710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.772555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.772586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.772597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.776686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.776716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.776728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.780346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.780378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.780389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.783822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.783870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.783881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.788044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.788073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.788083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.792012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.792041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.792052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.795768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.795809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.795819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.799265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.799306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.799317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.803693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.803733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.803744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.806848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.806889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.806900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.810102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.810137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.810158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.814380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.814407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.814419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.818237] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.818266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.818277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.821707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.821759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.821781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.825955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.825996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.826007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.829297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.829326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.829337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.833109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.833137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.833148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.836696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.836725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.836736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.839981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.840010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.840021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:01.921 [2024-12-01 15:03:34.844076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.844118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.844129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.847127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.847155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.847168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.851477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.851505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.851516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.855420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.855446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.855456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.921 [2024-12-01 15:03:34.859257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.921 [2024-12-01 15:03:34.859286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.921 [2024-12-01 15:03:34.859296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.863152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.863181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.863192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.866864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.866893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.866903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.870320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.870349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.870359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.874085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.874115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.874126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.878180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.878209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.878223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.882299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.882328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.882339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.885076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.885115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.885125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.889760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.889792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.889808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.892919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.892960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.892970] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.897361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.897402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.897413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.901481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.901510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.901521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.905828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.905886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.905897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.909053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.909081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.909092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.912998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.913026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.913036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.916909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.916952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.916964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.920659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.920702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.920713] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.924804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.924827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.924836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.928781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.928821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.928831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.932559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.932587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.932597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.936501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.936528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.936538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.939452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.939479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.939489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.943295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.943323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.943333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.946883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.946912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:01.922 [2024-12-01 15:03:34.946923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.950140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.950168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.950179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.953887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.953916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.953928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.957222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.957251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.922 [2024-12-01 15:03:34.957261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.922 [2024-12-01 15:03:34.960654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.922 [2024-12-01 15:03:34.960684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.960694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.964035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.964064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.964074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.967766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.967793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.967804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.971145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.971174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.971185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.974719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.974748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.974771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.978469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.978498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.978508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.982203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.982232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.982243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.985437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.985466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.985477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.989078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.989107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.989118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.992690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.992719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.992730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.996100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.996129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.996140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:34.999535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:34.999565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:34.999575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:35.003097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:35.003126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:35.003137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:35.007025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:35.007054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:35.007064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:35.010412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:35.010442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:35.010453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:35.014206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:35.014234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:35.014244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:35.018156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:35.018186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:35.018196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:35.022465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:35.022493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:35.022504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:35.025659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:35.025701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:35.025712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.923 [2024-12-01 15:03:35.029017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:01.923 [2024-12-01 15:03:35.029053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.923 [2024-12-01 15:03:35.029071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.183 [2024-12-01 15:03:35.032970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.183 [2024-12-01 15:03:35.033005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.183 [2024-12-01 15:03:35.033023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.183 [2024-12-01 15:03:35.036789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.183 [2024-12-01 15:03:35.036831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.183 [2024-12-01 15:03:35.036842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.183 [2024-12-01 15:03:35.040226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.183 [2024-12-01 15:03:35.040282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.183 [2024-12-01 15:03:35.040296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.183 [2024-12-01 15:03:35.044107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.183 [2024-12-01 15:03:35.044137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.183 [2024-12-01 15:03:35.044147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.183 [2024-12-01 15:03:35.047304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.183 
[2024-12-01 15:03:35.047333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.183 [2024-12-01 15:03:35.047344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.183 [2024-12-01 15:03:35.050884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.183 [2024-12-01 15:03:35.050914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.183 [2024-12-01 15:03:35.050925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.183 [2024-12-01 15:03:35.054679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.183 [2024-12-01 15:03:35.054707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.183 [2024-12-01 15:03:35.054718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.183 [2024-12-01 15:03:35.058012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.183 [2024-12-01 15:03:35.058053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.183 [2024-12-01 15:03:35.058064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.183 [2024-12-01 15:03:35.062059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.062087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.062098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.065257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.065286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.065297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.068471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.068500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.068510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.071722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.071760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.071778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.076129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.076158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.076168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.079630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.079659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.079670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.083218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.083246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.083257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.087280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.087308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.087318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.091245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.091274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.091286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.094885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.094913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.094923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.098190] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.098219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.098230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.101784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.101813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.101823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.104790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.104818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.104828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.108414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.108442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.108452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.112787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.112827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.112838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.116626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.116654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.116665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.120871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.120898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.120909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:02.184 [2024-12-01 15:03:35.124392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.124421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.124432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.128618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.128647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.128658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.132824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.132864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.132875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.137056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.137084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.137095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.140444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.140474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.140486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.143849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.143878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.143888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.147580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.147609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.147620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.150797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.150820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.150831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.154449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.154478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.154489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.158156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.184 [2024-12-01 15:03:35.158184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.184 [2024-12-01 15:03:35.158195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.184 [2024-12-01 15:03:35.161657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.161685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.161696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.165978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.166004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.166015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.169493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.169520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.169530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.173308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.173335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.173346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.177226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.177254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.177264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.181243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.181270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.181282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.184537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.184566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.184576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.188239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.188268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.188278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.191728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.191769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.191781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.195851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.195880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.195890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.199357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.199386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.199397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.203277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.203306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.203316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.207003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.207031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.207041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.210791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.210817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.210828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.214568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.214596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.214606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.217642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.217683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.217693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.221008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.221037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.221048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.224413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.224443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 
[2024-12-01 15:03:35.224454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.227995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.228037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.228047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.231588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.231617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.231627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.235527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.235555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.235566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.238875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.238904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.238915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.242601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.242630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.242640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.245725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.245765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.245779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.249367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.249396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.249407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.252521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.252550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.252562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.255939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.185 [2024-12-01 15:03:35.255967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.185 [2024-12-01 15:03:35.255978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.185 [2024-12-01 15:03:35.259778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.186 [2024-12-01 15:03:35.259807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.186 [2024-12-01 15:03:35.259818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.186 [2024-12-01 15:03:35.262859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.186 [2024-12-01 15:03:35.262900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.186 [2024-12-01 15:03:35.262911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.186 [2024-12-01 15:03:35.266276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.186 [2024-12-01 15:03:35.266303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.186 [2024-12-01 15:03:35.266313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.186 [2024-12-01 15:03:35.270241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.186 [2024-12-01 15:03:35.270268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.186 [2024-12-01 15:03:35.270279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.186 [2024-12-01 15:03:35.274414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.186 [2024-12-01 15:03:35.274442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.186 [2024-12-01 15:03:35.274453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.186 [2024-12-01 15:03:35.278247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.186 [2024-12-01 15:03:35.278276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.186 [2024-12-01 15:03:35.278287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.186 [2024-12-01 15:03:35.282531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.186 [2024-12-01 15:03:35.282560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.186 [2024-12-01 15:03:35.282571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.186 [2024-12-01 15:03:35.285909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.186 [2024-12-01 15:03:35.285939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.186 [2024-12-01 15:03:35.285949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.186 [2024-12-01 15:03:35.289400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.186 [2024-12-01 15:03:35.289436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.186 [2024-12-01 15:03:35.289447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.186 [2024-12-01 15:03:35.293437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.186 [2024-12-01 15:03:35.293487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.186 [2024-12-01 15:03:35.293505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.296735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.296778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.296790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.300621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.300651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.300662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.304334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.304366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.304377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.308494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.308524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.308535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.311891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.311920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.311931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.315369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.315399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.315409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.318586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.318617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.318627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.323034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.323063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.323073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.326052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 
00:23:02.446 [2024-12-01 15:03:35.326081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.326091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.329681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.329711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.329722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.333023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.333051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.333062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.336506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.336535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.336546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.340149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.340177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.340188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.343872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.343900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.343911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.347856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.347885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.347895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.351188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.351217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.351227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.354949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.354978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.354989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.358602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.358632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.358642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.362071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.362099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.362110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.364994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.365022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.365032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.368991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.369020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.369030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.372635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.372677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.372687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.376086] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.376116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.376126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.379481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.379510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.379521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.382848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.446 [2024-12-01 15:03:35.382889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.446 [2024-12-01 15:03:35.382900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.446 [2024-12-01 15:03:35.386355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.386385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.386396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.390146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.390174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.390185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.394158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.394187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.394198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.397854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.397884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.397895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:23:02.447 [2024-12-01 15:03:35.401596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.401624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.401635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.405284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.405313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.405324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.409401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.409437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.409455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.413541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.413569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.413580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.417699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.417726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.417736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.421570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.421597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.421610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.425314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.425341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.425351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.429273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.429300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.429310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.433231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.433258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.433269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.437699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.437728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.437742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.441407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.441441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.441460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.445672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.445713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.445723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.449378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.449405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.449415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.453306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.453331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.453341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.455955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.455982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.455993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.460163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.460193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.460216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.463986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.464027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.464037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.468224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.468261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.468276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.472360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.472401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.472413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.476129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.476169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.476180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.480064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.480105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 
[2024-12-01 15:03:35.480115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.484476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.484503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.484514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.488486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.447 [2024-12-01 15:03:35.488513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.447 [2024-12-01 15:03:35.488525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.447 [2024-12-01 15:03:35.492916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.492944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.492954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.496245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.496274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.496285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.500101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.500137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.500156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.503375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.503416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.503426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.507101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.507129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.507141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.509880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.509908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.509919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.513343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.513385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.513395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.517295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.517323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.517333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.520795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.520819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.520830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.524920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.524946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.524958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.528432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.528460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.528470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.531948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.531989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.531999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.535705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.535733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.535744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.539262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.539303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.539315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.542619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.542660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.542671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.546275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.546316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.546328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.549665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.549706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.549716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.553315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.553356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.553367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.448 [2024-12-01 15:03:35.557383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.448 [2024-12-01 15:03:35.557478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.448 [2024-12-01 15:03:35.557491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.706 [2024-12-01 15:03:35.561480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.561529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.561542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.565357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.565401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.565412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.568890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.568919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.568931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.572501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.572543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.572553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.576314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.576355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.576365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.579696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.579737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.579748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.583480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 
00:23:02.707 [2024-12-01 15:03:35.583509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.583520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.587423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.587464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.587475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.591049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.591091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.591101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.594147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.594188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.594199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.598178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.598207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.598218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.601695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.601723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.601733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.606067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.606095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.606106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.610151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.610180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.610191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.613627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.613656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.613667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.617935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.617964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.617975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.621459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.621489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.621500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.625727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.625768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.625780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.629826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.629869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.629880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.633367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.633396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.633406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.637228] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.637256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.637269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.640680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.640720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.640730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.644931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.644960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.644970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.648463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.648491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.648501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.652706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.652746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.652768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.656953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.656994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.707 [2024-12-01 15:03:35.657004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.707 [2024-12-01 15:03:35.660661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.707 [2024-12-01 15:03:35.660689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.660700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:02.708 [2024-12-01 15:03:35.664858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.664898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.664909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.668519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.668546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.668558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.671979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.672020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.672031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.675373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.675415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.675426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.678929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.678959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.678970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.682420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.682453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.682472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.686092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.686124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.686135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.689801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.689853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.689864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.693318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.693351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.693373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.697240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.697273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.697295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.701167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.701197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.701207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.703974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.704005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.704024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.707645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.707678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.707700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.711157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.711189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.711200] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.714960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.715002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.715013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.719281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.719311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.719322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.723649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.723680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.723702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.727567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.727597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.727620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.731345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.731376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.731399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.734619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.734652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.734674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.738018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.738050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.738070] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.741474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.741506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.741525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.745440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.745472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.745482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.749121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.749152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.749163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.752788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.752819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.752842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.756323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.756356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.708 [2024-12-01 15:03:35.756377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.708 [2024-12-01 15:03:35.759996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.708 [2024-12-01 15:03:35.760027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.760050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.763819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.763850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:02.709 [2024-12-01 15:03:35.763873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.767386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.767418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.767442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.771328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.771359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.771369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.775148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.775180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.775190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.778285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.778316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.778338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.781078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.781109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.781120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.784697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.784729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.784763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.788672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.788703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10624 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.788725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.792723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.792766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.792779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.795782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.795811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.795831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.799253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.799284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.799295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.803550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.803582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.803604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.806566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.806598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.806620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.810664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.810696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.810719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.813697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.813729] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.813740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.709 [2024-12-01 15:03:35.817474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.709 [2024-12-01 15:03:35.817509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.709 [2024-12-01 15:03:35.817528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.968 [2024-12-01 15:03:35.821370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.968 [2024-12-01 15:03:35.821404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.968 [2024-12-01 15:03:35.821435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.968 [2024-12-01 15:03:35.826302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.968 [2024-12-01 15:03:35.826336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.968 [2024-12-01 15:03:35.826358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.968 [2024-12-01 15:03:35.829886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.968 [2024-12-01 15:03:35.829917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.968 [2024-12-01 15:03:35.829928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.968 [2024-12-01 15:03:35.833641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.968 [2024-12-01 15:03:35.833676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.968 [2024-12-01 15:03:35.833697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.968 [2024-12-01 15:03:35.836747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.968 [2024-12-01 15:03:35.836787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.836807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.841181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.841214] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.841238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.844238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.844270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.844280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.848173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.848206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.848229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.851547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.851579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.851601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.855329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.855362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.855385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.858351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.858382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.858405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.862035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.862067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.862087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.865562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 
00:23:02.969 [2024-12-01 15:03:35.865593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.865613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.869658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.869691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.869702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.872995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.873038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.873060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.876371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.876404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.876426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.880118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.880151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.880174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.883697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.883730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.883762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.887659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.887691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.887714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.891553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.891586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.891609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.895008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.895039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.895050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.898564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.898596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.898611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.901906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.901939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.901950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.905370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.905414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.905442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.909000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.909041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.909052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.912853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.912896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.912907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.916106] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.916151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.916168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.920239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.920272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.920283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.923549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.923581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.923604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.927491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.927523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.927546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.931061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.931094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.969 [2024-12-01 15:03:35.931105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.969 [2024-12-01 15:03:35.935007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.969 [2024-12-01 15:03:35.935039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.935050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.938835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.938866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.938886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:02.970 [2024-12-01 15:03:35.942711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.942742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.942766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.946461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.946493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.946514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.949814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.949845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.949868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.952861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.952892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.952914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.956621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.956653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.956664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.960901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.960933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.960944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.963635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.963667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.963688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.967690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.967723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.967746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.971183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.971215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.971226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.974647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.974680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.974702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.978179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.978211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.978221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.981591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.981625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.981645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.985149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.985180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.985190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.989201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.989232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.989243] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.992334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.992365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.992388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.996003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.996047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.996059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:35.999462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:35.999494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:35.999516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:36.002547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:36.002579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:36.002602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:36.006494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:36.006526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:36.006548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:36.009853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:36.009885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:36.009908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:36.013707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:36.013740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:36.013767] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:36.016601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:36.016632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:36.016655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:36.020565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:36.020597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:36.020620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:36.024764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:36.024793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:36.024817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:36.027460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.970 [2024-12-01 15:03:36.027491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.970 [2024-12-01 15:03:36.027514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.970 [2024-12-01 15:03:36.031669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.031702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.031713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.034746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.034789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.034812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.038170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.038202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:02.971 [2024-12-01 15:03:36.038213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.041477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.041509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.041529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.044959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.044990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.045013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.048864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.048896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.048919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.052287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.052319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.052341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.056180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.056212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.056224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.059980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.060012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.060035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.063147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.063180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.063191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.066440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.066473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.066494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.070191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.070223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.070245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.073689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.073722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.073742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.971 [2024-12-01 15:03:36.077461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:02.971 [2024-12-01 15:03:36.077495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.971 [2024-12-01 15:03:36.077521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.230 [2024-12-01 15:03:36.081458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.230 [2024-12-01 15:03:36.081494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.230 [2024-12-01 15:03:36.081516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.230 [2024-12-01 15:03:36.085063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.230 [2024-12-01 15:03:36.085097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.230 [2024-12-01 15:03:36.085119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.230 [2024-12-01 15:03:36.088945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.230 [2024-12-01 15:03:36.088979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.230 [2024-12-01 15:03:36.089000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.230 [2024-12-01 15:03:36.092593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.230 [2024-12-01 15:03:36.092626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.230 [2024-12-01 15:03:36.092647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.230 [2024-12-01 15:03:36.096870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.230 [2024-12-01 15:03:36.096901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.230 [2024-12-01 15:03:36.096923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.230 [2024-12-01 15:03:36.100378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.230 [2024-12-01 15:03:36.100411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.230 [2024-12-01 15:03:36.100422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.230 [2024-12-01 15:03:36.104314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.230 [2024-12-01 15:03:36.104347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.104368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.107420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.107452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.107474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.110378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.110409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.110432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.114648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 
00:23:03.231 [2024-12-01 15:03:36.114680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.114701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.118208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.118240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.118251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.121458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.121490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.121511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.124907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.124939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.124962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.128330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.128363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.128374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.132131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.132163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.132187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.135513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.135545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.135555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.138894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.138927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.138946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.142767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.142798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.142818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.146440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.146473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.146495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.150054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.150086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.150097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.154024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.154056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.154078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.157907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.157938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.157960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.162042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.162072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.162083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.166051] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.166082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.166092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.168609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.168639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.168662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.172484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.172515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.172526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.176693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.176724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.231 [2024-12-01 15:03:36.176735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.231 [2024-12-01 15:03:36.179770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.231 [2024-12-01 15:03:36.179800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.179820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.183534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.183566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.183589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.187623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.187655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.187666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:03.232 [2024-12-01 15:03:36.191320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.191352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.191363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.195374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.195405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.195428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.199155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.199186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.199197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.203024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.203057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.203080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.206847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.206879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.206902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.210391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.210424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.210445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.213605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.213637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.213657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.217098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.217130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.217149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.220127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.220158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.220168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.223868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.223899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.223922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.227894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.227926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.227948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.231467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.231500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.231522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.235219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.235252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.235274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.238865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.238897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.238907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.242002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.242034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.242053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.245588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.245621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.245642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.249578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.249610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.249631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.232 [2024-12-01 15:03:36.253074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.232 [2024-12-01 15:03:36.253106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.232 [2024-12-01 15:03:36.253117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.233 [2024-12-01 15:03:36.256990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.233 [2024-12-01 15:03:36.257023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.233 [2024-12-01 15:03:36.257044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.233 [2024-12-01 15:03:36.260371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.233 [2024-12-01 15:03:36.260404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.233 [2024-12-01 15:03:36.260426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.233 [2024-12-01 15:03:36.264164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.233 [2024-12-01 15:03:36.264196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.233 [2024-12-01 15:03:36.264220] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.233 [2024-12-01 15:03:36.267285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.233 [2024-12-01 15:03:36.267316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.233 [2024-12-01 15:03:36.267338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.233 [2024-12-01 15:03:36.270846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.233 [2024-12-01 15:03:36.270874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.233 [2024-12-01 15:03:36.270894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.233 [2024-12-01 15:03:36.274489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.233 [2024-12-01 15:03:36.274521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.233 [2024-12-01 15:03:36.274543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.233 [2024-12-01 15:03:36.278287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.233 [2024-12-01 15:03:36.278320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.233 [2024-12-01 15:03:36.278343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.233 [2024-12-01 15:03:36.282428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.233 [2024-12-01 15:03:36.282461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.233 [2024-12-01 15:03:36.282484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.233 [2024-12-01 15:03:36.285446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x124dd10) 00:23:03.233 [2024-12-01 15:03:36.285478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.233 [2024-12-01 15:03:36.285498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.233 00:23:03.233 Latency(us) 00:23:03.233 [2024-12-01T15:03:36.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.233 [2024-12-01T15:03:36.348Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:03.233 nvme0n1 : 2.00 8345.42 1043.18 0.00 0.00 
1914.37 517.59 8460.10 00:23:03.233 [2024-12-01T15:03:36.348Z] =================================================================================================================== 00:23:03.233 [2024-12-01T15:03:36.348Z] Total : 8345.42 1043.18 0.00 0.00 1914.37 517.59 8460.10 00:23:03.233 0 00:23:03.233 15:03:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:03.233 15:03:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:03.233 | .driver_specific 00:23:03.233 | .nvme_error 00:23:03.233 | .status_code 00:23:03.233 | .command_transient_transport_error' 00:23:03.233 15:03:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:03.233 15:03:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:03.490 15:03:36 -- host/digest.sh@71 -- # (( 538 > 0 )) 00:23:03.490 15:03:36 -- host/digest.sh@73 -- # killprocess 97892 00:23:03.490 15:03:36 -- common/autotest_common.sh@936 -- # '[' -z 97892 ']' 00:23:03.490 15:03:36 -- common/autotest_common.sh@940 -- # kill -0 97892 00:23:03.490 15:03:36 -- common/autotest_common.sh@941 -- # uname 00:23:03.490 15:03:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:03.490 15:03:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97892 00:23:03.490 15:03:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:03.490 killing process with pid 97892 00:23:03.490 Received shutdown signal, test time was about 2.000000 seconds 00:23:03.490 00:23:03.490 Latency(us) 00:23:03.490 [2024-12-01T15:03:36.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.490 [2024-12-01T15:03:36.605Z] =================================================================================================================== 00:23:03.490 [2024-12-01T15:03:36.605Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.490 15:03:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:03.490 15:03:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97892' 00:23:03.490 15:03:36 -- common/autotest_common.sh@955 -- # kill 97892 00:23:03.490 15:03:36 -- common/autotest_common.sh@960 -- # wait 97892 00:23:03.749 15:03:36 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:23:03.749 15:03:36 -- host/digest.sh@54 -- # local rw bs qd 00:23:03.749 15:03:36 -- host/digest.sh@56 -- # rw=randwrite 00:23:03.749 15:03:36 -- host/digest.sh@56 -- # bs=4096 00:23:03.749 15:03:36 -- host/digest.sh@56 -- # qd=128 00:23:03.749 15:03:36 -- host/digest.sh@58 -- # bperfpid=97981 00:23:03.749 15:03:36 -- host/digest.sh@60 -- # waitforlisten 97981 /var/tmp/bperf.sock 00:23:03.749 15:03:36 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:03.749 15:03:36 -- common/autotest_common.sh@829 -- # '[' -z 97981 ']' 00:23:03.749 15:03:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:03.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:03.749 15:03:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.749 15:03:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
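Note on the check traced just above: the harness judges the randread digest run by asking the bdevperf instance (over its /var/tmp/bperf.sock RPC socket) for iostat and pulling out the transient-transport-error counter that --nvme-error-stat keeps per status code. A minimal sketch of that check, reusing the exact RPC command and jq filter from the trace (the surrounding script is simplified, not the repo's digest.sh verbatim):

# sketch only; assumes bdevperf was started with bdev_nvme_set_options --nvme-error-stat
get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat reports per-status-code NVMe error counters under driver_specific.nvme_error
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# the run above extracted 538; the test only requires a non-zero count
(( errcount > 0 ))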
00:23:03.749 15:03:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.749 15:03:36 -- common/autotest_common.sh@10 -- # set +x 00:23:03.749 [2024-12-01 15:03:36.830900] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:03.749 [2024-12-01 15:03:36.831009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97981 ] 00:23:04.006 [2024-12-01 15:03:36.970390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.006 [2024-12-01 15:03:37.014658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.939 15:03:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.939 15:03:37 -- common/autotest_common.sh@862 -- # return 0 00:23:04.939 15:03:37 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:04.939 15:03:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:05.198 15:03:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:05.198 15:03:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.198 15:03:38 -- common/autotest_common.sh@10 -- # set +x 00:23:05.198 15:03:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.198 15:03:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:05.198 15:03:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:05.456 nvme0n1 00:23:05.456 15:03:38 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:05.456 15:03:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.456 15:03:38 -- common/autotest_common.sh@10 -- # set +x 00:23:05.456 15:03:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.456 15:03:38 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:05.456 15:03:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:05.456 Running I/O for 2 seconds... 
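For readability, here is a condensed sketch of the setup the trace above walks through for the randwrite/4096/qd128 data-digest case. Paths, addresses and arguments are copied from the log; process/wait handling is simplified, and "rpc_cmd" stands in for the helper that talks to the SPDK application's default RPC socket (an assumption about where the injection RPC lands, since the trace does not show a socket argument for it).

BPERF_SOCK=/var/tmp/bperf.sock
SPDK=/home/vagrant/spdk_repo/spdk

# 1. Start bdevperf as the TCP initiator; -z makes it wait for a perform_tests RPC.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 4096 -t 2 -q 128 -z &
sleep 1  # the real harness polls until the socket exists (waitforlisten)

# 2. Keep NVMe error statistics and retry failed I/O at the bdev_nvme layer (-1 as in the trace).
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# 3. Keep crc32c error injection off while the controller attaches ...
rpc_cmd accel_error_inject_error -o crc32c -t disable

# 4. ... then attach over TCP with data digest enabled so corrupted CRCs are detected.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 5. Enable crc32c corruption (the "-i 256" argument is taken verbatim from the trace)
#    and kick off the 2-second workload that produces the digest errors logged below.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests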
00:23:05.456 [2024-12-01 15:03:38.487862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eea00 00:23:05.456 [2024-12-01 15:03:38.488820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.456 [2024-12-01 15:03:38.488873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:05.456 [2024-12-01 15:03:38.497645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ea680 00:23:05.456 [2024-12-01 15:03:38.498301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.456 [2024-12-01 15:03:38.498333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:05.456 [2024-12-01 15:03:38.507240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e99d8 00:23:05.456 [2024-12-01 15:03:38.507832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.456 [2024-12-01 15:03:38.507883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:05.456 [2024-12-01 15:03:38.516546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190de8a8 00:23:05.456 [2024-12-01 15:03:38.517668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.456 [2024-12-01 15:03:38.517700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:05.456 [2024-12-01 15:03:38.526148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f3e60 00:23:05.456 [2024-12-01 15:03:38.526663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.456 [2024-12-01 15:03:38.526693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:05.456 [2024-12-01 15:03:38.535692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190de8a8 00:23:05.456 [2024-12-01 15:03:38.536278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.456 [2024-12-01 15:03:38.536314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:05.456 [2024-12-01 15:03:38.545094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e5ec8 00:23:05.456 [2024-12-01 15:03:38.545497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.456 [2024-12-01 15:03:38.545524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 
sqhd:0063 p:0 m:0 dnr:0 00:23:05.456 [2024-12-01 15:03:38.554515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f57b0 00:23:05.456 [2024-12-01 15:03:38.555192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.456 [2024-12-01 15:03:38.555223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.456 [2024-12-01 15:03:38.565642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f3e60 00:23:05.456 [2024-12-01 15:03:38.566976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.456 [2024-12-01 15:03:38.567010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.574537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f7100 00:23:05.715 [2024-12-01 15:03:38.575221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.575254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.582377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ed0b0 00:23:05.715 [2024-12-01 15:03:38.582584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.582604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.591713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ec840 00:23:05.715 [2024-12-01 15:03:38.592309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.592336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.600852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190efae0 00:23:05.715 [2024-12-01 15:03:38.601051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.601071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.610429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f7100 00:23:05.715 [2024-12-01 15:03:38.611159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.611190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.620467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ee190 00:23:05.715 [2024-12-01 15:03:38.621642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.621673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.629728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4578 00:23:05.715 [2024-12-01 15:03:38.630887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.630916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.638978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4de8 00:23:05.715 [2024-12-01 15:03:38.640145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.640174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.648282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ebb98 00:23:05.715 [2024-12-01 15:03:38.648841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.648880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.657787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e1710 00:23:05.715 [2024-12-01 15:03:38.658875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.658907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.666263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e6b70 00:23:05.715 [2024-12-01 15:03:38.667519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.667549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.676006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e3498 00:23:05.715 [2024-12-01 15:03:38.676872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.676902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.685220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e99d8 00:23:05.715 [2024-12-01 15:03:38.685802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.685838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.694422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ea248 00:23:05.715 [2024-12-01 15:03:38.694976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.695014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.702893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e9168 00:23:05.715 [2024-12-01 15:03:38.703314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.703340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.712714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190efae0 00:23:05.715 [2024-12-01 15:03:38.713884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.713915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.721930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eee38 00:23:05.715 [2024-12-01 15:03:38.723039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.723069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.731259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fb8b8 00:23:05.715 [2024-12-01 15:03:38.732145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.732175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.740468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ed920 00:23:05.715 [2024-12-01 15:03:38.740797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.740816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.749789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fc128 00:23:05.715 [2024-12-01 15:03:38.750935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.750968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.758875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f9f68 00:23:05.715 [2024-12-01 15:03:38.759685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.759716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.768109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f0ff8 00:23:05.715 [2024-12-01 15:03:38.768343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.768375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.777547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ec840 00:23:05.715 [2024-12-01 15:03:38.778362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.778392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.787065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ea680 00:23:05.715 [2024-12-01 15:03:38.787961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.787991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.796263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190efae0 00:23:05.715 [2024-12-01 15:03:38.797651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.797682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.806286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e12d8 00:23:05.715 [2024-12-01 15:03:38.807159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 
15:03:38.807189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.815455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ed4e8 00:23:05.715 [2024-12-01 15:03:38.816733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.816771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:05.715 [2024-12-01 15:03:38.825326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ddc00 00:23:05.715 [2024-12-01 15:03:38.827116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.715 [2024-12-01 15:03:38.827158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.833727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e3498 00:23:05.974 [2024-12-01 15:03:38.834893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.974 [2024-12-01 15:03:38.834925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.843020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ecc78 00:23:05.974 [2024-12-01 15:03:38.843323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.974 [2024-12-01 15:03:38.843350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.852169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f20d8 00:23:05.974 [2024-12-01 15:03:38.853143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.974 [2024-12-01 15:03:38.853175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.861677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ecc78 00:23:05.974 [2024-12-01 15:03:38.862355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.974 [2024-12-01 15:03:38.862386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.870948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fe2e8 00:23:05.974 [2024-12-01 15:03:38.871405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:05.974 [2024-12-01 15:03:38.871432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.880378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f5be8 00:23:05.974 [2024-12-01 15:03:38.881261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.974 [2024-12-01 15:03:38.881291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.889555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fc560 00:23:05.974 [2024-12-01 15:03:38.890959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.974 [2024-12-01 15:03:38.890996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.899311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fb480 00:23:05.974 [2024-12-01 15:03:38.899943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.974 [2024-12-01 15:03:38.899982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.908575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f8e88 00:23:05.974 [2024-12-01 15:03:38.909251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.974 [2024-12-01 15:03:38.909281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.917736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fc998 00:23:05.974 [2024-12-01 15:03:38.918494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.974 [2024-12-01 15:03:38.918525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:05.974 [2024-12-01 15:03:38.926974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e23b8 00:23:05.975 [2024-12-01 15:03:38.927662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:38.927692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:38.935903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e1f80 00:23:05.975 [2024-12-01 15:03:38.936665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23929 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:38.936695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:38.945633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fb480 00:23:05.975 [2024-12-01 15:03:38.946653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:38.946682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:38.954970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4140 00:23:05.975 [2024-12-01 15:03:38.955787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:38.955828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:38.965086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f0788 00:23:05.975 [2024-12-01 15:03:38.965576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:38.965602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:38.975754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fc560 00:23:05.975 [2024-12-01 15:03:38.976268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:38.976294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:38.986158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eea00 00:23:05.975 [2024-12-01 15:03:38.986651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:38.986677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:38.995836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e5220 00:23:05.975 [2024-12-01 15:03:38.996727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:38.996770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:39.005278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e5658 00:23:05.975 [2024-12-01 15:03:39.005850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:23628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:39.005876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:39.014624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fe2e8 00:23:05.975 [2024-12-01 15:03:39.015543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:39.015585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:39.022845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e12d8 00:23:05.975 [2024-12-01 15:03:39.022945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:39.022965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:39.032060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fda78 00:23:05.975 [2024-12-01 15:03:39.032995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:39.033026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:39.042941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f7100 00:23:05.975 [2024-12-01 15:03:39.043623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:39.043654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:39.052298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ef270 00:23:05.975 [2024-12-01 15:03:39.052962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:39.052991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:39.061612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eea00 00:23:05.975 [2024-12-01 15:03:39.062311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:39.062346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:39.070835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fa3a0 00:23:05.975 [2024-12-01 15:03:39.071495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:71 nsid:1 lba:1140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:39.071523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:05.975 [2024-12-01 15:03:39.080101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4578 00:23:05.975 [2024-12-01 15:03:39.080774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.975 [2024-12-01 15:03:39.080802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.088822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f4298 00:23:06.234 [2024-12-01 15:03:39.089546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.089580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.098553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e6fa8 00:23:06.234 [2024-12-01 15:03:39.099515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.099547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.107521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e6300 00:23:06.234 [2024-12-01 15:03:39.107698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.107719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.117817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ea248 00:23:06.234 [2024-12-01 15:03:39.118619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.118650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.128118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f4298 00:23:06.234 [2024-12-01 15:03:39.128272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.128296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.138697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e84c0 00:23:06.234 [2024-12-01 15:03:39.139165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.139190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.147841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fa3a0 00:23:06.234 [2024-12-01 15:03:39.148170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.148201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.157664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e8088 00:23:06.234 [2024-12-01 15:03:39.158002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.158026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.167059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fef90 00:23:06.234 [2024-12-01 15:03:39.168051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.168093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.177070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ddc00 00:23:06.234 [2024-12-01 15:03:39.177565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.177590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.186910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ee5c8 00:23:06.234 [2024-12-01 15:03:39.187351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.234 [2024-12-01 15:03:39.187376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:06.234 [2024-12-01 15:03:39.196687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e38d0 00:23:06.235 [2024-12-01 15:03:39.197127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.197151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.207511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e5658 00:23:06.235 [2024-12-01 
15:03:39.208965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.208995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.216070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f1ca0 00:23:06.235 [2024-12-01 15:03:39.216408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.216432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.225489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fef90 00:23:06.235 [2024-12-01 15:03:39.225826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.225851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.235234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ebb98 00:23:06.235 [2024-12-01 15:03:39.236174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.236213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.244913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e73e0 00:23:06.235 [2024-12-01 15:03:39.246057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.246097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.254305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e0a68 00:23:06.235 [2024-12-01 15:03:39.255097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.255127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.263599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4de8 00:23:06.235 [2024-12-01 15:03:39.264400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.264430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.274219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f96f8 
00:23:06.235 [2024-12-01 15:03:39.274736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.274768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.284582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f6020 00:23:06.235 [2024-12-01 15:03:39.285311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.285341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.293029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190de8a8 00:23:06.235 [2024-12-01 15:03:39.293777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.293806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.304038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f31b8 00:23:06.235 [2024-12-01 15:03:39.304694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.304720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.312195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4de8 00:23:06.235 [2024-12-01 15:03:39.313008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.313035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.324063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ddc00 00:23:06.235 [2024-12-01 15:03:39.325555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.325585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.333023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fac10 00:23:06.235 [2024-12-01 15:03:39.333932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.333966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:06.235 [2024-12-01 15:03:39.341132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) 
with pdu=0x2000190df988 00:23:06.235 [2024-12-01 15:03:39.342018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.235 [2024-12-01 15:03:39.342057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.351421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e5658 00:23:06.494 [2024-12-01 15:03:39.351947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.351977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.362115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4578 00:23:06.494 [2024-12-01 15:03:39.363076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.363107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.370429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f7da8 00:23:06.494 [2024-12-01 15:03:39.371322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.371357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.380196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fd640 00:23:06.494 [2024-12-01 15:03:39.381390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.381423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.389332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190df550 00:23:06.494 [2024-12-01 15:03:39.389865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.389892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.398575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e12d8 00:23:06.494 [2024-12-01 15:03:39.399644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.399671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.407666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cb20e0) with pdu=0x2000190eea00 00:23:06.494 [2024-12-01 15:03:39.408680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.408712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.416973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190feb58 00:23:06.494 [2024-12-01 15:03:39.417308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.417332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.427582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fbcf0 00:23:06.494 [2024-12-01 15:03:39.428969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.429002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.435625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e49b0 00:23:06.494 [2024-12-01 15:03:39.436539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.436571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.444798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f0bc0 00:23:06.494 [2024-12-01 15:03:39.445792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.445822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.454040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e0ea0 00:23:06.494 [2024-12-01 15:03:39.455029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.455067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.463243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e3d08 00:23:06.494 [2024-12-01 15:03:39.463971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.464004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.472486] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e8088 00:23:06.494 [2024-12-01 15:03:39.473380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.473407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.482076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f7538 00:23:06.494 [2024-12-01 15:03:39.482234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.482255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.491519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e0630 00:23:06.494 [2024-12-01 15:03:39.492377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.492403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.501070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eaef0 00:23:06.494 [2024-12-01 15:03:39.502031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.502064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.510419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eaef0 00:23:06.494 [2024-12-01 15:03:39.511484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.511510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.519489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ea680 00:23:06.494 [2024-12-01 15:03:39.520176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.520209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.529522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f96f8 00:23:06.494 [2024-12-01 15:03:39.529972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.530008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.539845] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fcdd0 00:23:06.494 [2024-12-01 15:03:39.540467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.540498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.549212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e9e10 00:23:06.494 [2024-12-01 15:03:39.549948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.494 [2024-12-01 15:03:39.549983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:06.494 [2024-12-01 15:03:39.558443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4578 00:23:06.495 [2024-12-01 15:03:39.559309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.495 [2024-12-01 15:03:39.559335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.495 [2024-12-01 15:03:39.567128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e0a68 00:23:06.495 [2024-12-01 15:03:39.568141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.495 [2024-12-01 15:03:39.568185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:06.495 [2024-12-01 15:03:39.576558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f8a50 00:23:06.495 [2024-12-01 15:03:39.577441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.495 [2024-12-01 15:03:39.577473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:06.495 [2024-12-01 15:03:39.586036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e5ec8 00:23:06.495 [2024-12-01 15:03:39.586523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.495 [2024-12-01 15:03:39.586547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.495 [2024-12-01 15:03:39.595152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e73e0 00:23:06.495 [2024-12-01 15:03:39.596047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.495 [2024-12-01 15:03:39.596377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:06.495 
[2024-12-01 15:03:39.605609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f0ff8 00:23:06.495 [2024-12-01 15:03:39.606872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.495 [2024-12-01 15:03:39.607061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.754 [2024-12-01 15:03:39.615118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f31b8 00:23:06.754 [2024-12-01 15:03:39.615434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.754 [2024-12-01 15:03:39.615654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:06.754 [2024-12-01 15:03:39.624375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fef90 00:23:06.754 [2024-12-01 15:03:39.624882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.754 [2024-12-01 15:03:39.625057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:06.754 [2024-12-01 15:03:39.633585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f0350 00:23:06.754 [2024-12-01 15:03:39.634121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.754 [2024-12-01 15:03:39.634313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:06.754 [2024-12-01 15:03:39.642810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f6020 00:23:06.754 [2024-12-01 15:03:39.643075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.754 [2024-12-01 15:03:39.643097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:06.754 [2024-12-01 15:03:39.651687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fe2e8 00:23:06.754 [2024-12-01 15:03:39.651964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.754 [2024-12-01 15:03:39.651990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:06.754 [2024-12-01 15:03:39.662328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f31b8 00:23:06.754 [2024-12-01 15:03:39.663096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.754 [2024-12-01 15:03:39.663124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f 
p:0 m:0 dnr:0 00:23:06.754 [2024-12-01 15:03:39.670179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f20d8 00:23:06.754 [2024-12-01 15:03:39.671301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.754 [2024-12-01 15:03:39.671467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:06.754 [2024-12-01 15:03:39.678594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fa7d8 00:23:06.754 [2024-12-01 15:03:39.679502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.754 [2024-12-01 15:03:39.679538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.687498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ed4e8 00:23:06.755 [2024-12-01 15:03:39.688657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.688693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.697388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f6458 00:23:06.755 [2024-12-01 15:03:39.698268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.698297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.706424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e99d8 00:23:06.755 [2024-12-01 15:03:39.707160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.707195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.715211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f9b30 00:23:06.755 [2024-12-01 15:03:39.716230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.716263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.724745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190dece0 00:23:06.755 [2024-12-01 15:03:39.725309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.725336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.733035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f81e0 00:23:06.755 [2024-12-01 15:03:39.734104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.734137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.741939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f1430 00:23:06.755 [2024-12-01 15:03:39.742777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.742839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.750808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fa7d8 00:23:06.755 [2024-12-01 15:03:39.751728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.751805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.759059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f7100 00:23:06.755 [2024-12-01 15:03:39.759189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.759209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.769348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f31b8 00:23:06.755 [2024-12-01 15:03:39.770158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.770223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.777554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f8e88 00:23:06.755 [2024-12-01 15:03:39.778238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.778272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.786541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f1430 00:23:06.755 [2024-12-01 15:03:39.786949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.786977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.795415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eb328 00:23:06.755 [2024-12-01 15:03:39.795787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.795812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.804286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e1f80 00:23:06.755 [2024-12-01 15:03:39.804757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.804807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.813380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e0630 00:23:06.755 [2024-12-01 15:03:39.813721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.813747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.822227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f6cc8 00:23:06.755 [2024-12-01 15:03:39.822580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.822605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.831243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e6738 00:23:06.755 [2024-12-01 15:03:39.831638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.831664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.840299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f7970 00:23:06.755 [2024-12-01 15:03:39.840771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.840825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.849192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f2948 00:23:06.755 [2024-12-01 15:03:39.849741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.849855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:06.755 [2024-12-01 15:03:39.858098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f31b8 00:23:06.755 [2024-12-01 15:03:39.858692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.755 [2024-12-01 15:03:39.858723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.867682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ed0b0 00:23:07.015 [2024-12-01 15:03:39.868090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.868121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.877286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fc998 00:23:07.015 [2024-12-01 15:03:39.878050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.878080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.888321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eff18 00:23:07.015 [2024-12-01 15:03:39.890101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.890134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.896332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e23b8 00:23:07.015 [2024-12-01 15:03:39.897417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.897629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.905318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f9f68 00:23:07.015 [2024-12-01 15:03:39.906258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.906294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.914290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f0ff8 00:23:07.015 [2024-12-01 15:03:39.915214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.915248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.923212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fa3a0 00:23:07.015 [2024-12-01 15:03:39.924311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.924472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.933088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190feb58 00:23:07.015 [2024-12-01 15:03:39.934641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.934677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.942888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f1ca0 00:23:07.015 [2024-12-01 15:03:39.943731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.943925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.949681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ddc00 00:23:07.015 [2024-12-01 15:03:39.949837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.949858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.959619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ebfd0 00:23:07.015 [2024-12-01 15:03:39.960347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.960384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.968405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fbcf0 00:23:07.015 [2024-12-01 15:03:39.968655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.968678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.977574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190feb58 00:23:07.015 [2024-12-01 15:03:39.978599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 
15:03:39.978631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.985883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e88f8 00:23:07.015 [2024-12-01 15:03:39.986015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.986036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:39.997911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fcdd0 00:23:07.015 [2024-12-01 15:03:39.998555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:39.998733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:40.010043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4de8 00:23:07.015 [2024-12-01 15:03:40.011109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:40.011174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.015 [2024-12-01 15:03:40.017216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eaef0 00:23:07.015 [2024-12-01 15:03:40.017736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.015 [2024-12-01 15:03:40.017802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.027868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ee190 00:23:07.016 [2024-12-01 15:03:40.028658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.016 [2024-12-01 15:03:40.028689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.037890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e8088 00:23:07.016 [2024-12-01 15:03:40.039023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.016 [2024-12-01 15:03:40.039074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.045948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eb760 00:23:07.016 [2024-12-01 15:03:40.047051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:07.016 [2024-12-01 15:03:40.047100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.055040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e73e0 00:23:07.016 [2024-12-01 15:03:40.055998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.016 [2024-12-01 15:03:40.056059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.065290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ea248 00:23:07.016 [2024-12-01 15:03:40.065513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.016 [2024-12-01 15:03:40.065541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.075127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e7c50 00:23:07.016 [2024-12-01 15:03:40.076122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.016 [2024-12-01 15:03:40.076156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.083812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f4b08 00:23:07.016 [2024-12-01 15:03:40.085015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.016 [2024-12-01 15:03:40.085046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.092955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f92c0 00:23:07.016 [2024-12-01 15:03:40.093682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.016 [2024-12-01 15:03:40.093905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.101766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f2948 00:23:07.016 [2024-12-01 15:03:40.103059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.016 [2024-12-01 15:03:40.103095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.110452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e3d08 00:23:07.016 [2024-12-01 15:03:40.111412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15762 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:07.016 [2024-12-01 15:03:40.111447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:07.016 [2024-12-01 15:03:40.119887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e27f0 00:23:07.016 [2024-12-01 15:03:40.120616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.016 [2024-12-01 15:03:40.120829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.130405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f81e0 00:23:07.275 [2024-12-01 15:03:40.131227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.131264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.139524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f96f8 00:23:07.275 [2024-12-01 15:03:40.140397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.140426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.148410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ed920 00:23:07.275 [2024-12-01 15:03:40.149977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.150011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.157132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fbcf0 00:23:07.275 [2024-12-01 15:03:40.158333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.158366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.166457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f3e60 00:23:07.275 [2024-12-01 15:03:40.167498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.167526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.175419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f0788 00:23:07.275 [2024-12-01 15:03:40.176119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7983 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.176154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.184289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e8d30 00:23:07.275 [2024-12-01 15:03:40.184965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.185001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.193199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f5be8 00:23:07.275 [2024-12-01 15:03:40.194034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.194095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.202301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ed920 00:23:07.275 [2024-12-01 15:03:40.202947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.203098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.210545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fd208 00:23:07.275 [2024-12-01 15:03:40.212191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.212225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.219544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e3d08 00:23:07.275 [2024-12-01 15:03:40.221227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.221260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.228559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f20d8 00:23:07.275 [2024-12-01 15:03:40.230447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.230632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.238455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e6fa8 00:23:07.275 [2024-12-01 15:03:40.240074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:24069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.240289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.247352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190dece0 00:23:07.275 [2024-12-01 15:03:40.248636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.248866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.257018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190eb760 00:23:07.275 [2024-12-01 15:03:40.258217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.258406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.266275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f7538 00:23:07.275 [2024-12-01 15:03:40.267139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.267319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.274436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e1710 00:23:07.275 [2024-12-01 15:03:40.274854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.275113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.284569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190de038 00:23:07.275 [2024-12-01 15:03:40.285807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.285875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.294142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ed920 00:23:07.275 [2024-12-01 15:03:40.294501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.294526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.303135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f9b30 00:23:07.275 [2024-12-01 15:03:40.303681] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.303717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.312081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4578 00:23:07.275 [2024-12-01 15:03:40.312600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.312636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.320994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e3060 00:23:07.275 [2024-12-01 15:03:40.321613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.321676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.330156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190dece0 00:23:07.275 [2024-12-01 15:03:40.330592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.330650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:07.275 [2024-12-01 15:03:40.339330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fb480 00:23:07.275 [2024-12-01 15:03:40.340602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.275 [2024-12-01 15:03:40.340635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:07.276 [2024-12-01 15:03:40.348456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ee190 00:23:07.276 [2024-12-01 15:03:40.349063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-12-01 15:03:40.349093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:07.276 [2024-12-01 15:03:40.357631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f4b08 00:23:07.276 [2024-12-01 15:03:40.358124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-12-01 15:03:40.358151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:07.276 [2024-12-01 15:03:40.366628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f5be8 00:23:07.276 [2024-12-01 15:03:40.367308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-12-01 15:03:40.367460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:07.276 [2024-12-01 15:03:40.374493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e1f80 00:23:07.276 [2024-12-01 15:03:40.374644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-12-01 15:03:40.374670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:07.276 [2024-12-01 15:03:40.385935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e84c0 00:23:07.276 [2024-12-01 15:03:40.386727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.276 [2024-12-01 15:03:40.386778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:07.534 [2024-12-01 15:03:40.395794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f3e60 00:23:07.534 [2024-12-01 15:03:40.397061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.534 [2024-12-01 15:03:40.397106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:07.534 [2024-12-01 15:03:40.404745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ef270 00:23:07.534 [2024-12-01 15:03:40.405354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.534 [2024-12-01 15:03:40.405380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:07.534 [2024-12-01 15:03:40.413908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e3060 00:23:07.534 [2024-12-01 15:03:40.415462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.534 [2024-12-01 15:03:40.415659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:07.534 [2024-12-01 15:03:40.422826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ddc00 00:23:07.534 [2024-12-01 15:03:40.424102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.534 [2024-12-01 15:03:40.424304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:07.534 [2024-12-01 15:03:40.432083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fe2e8 00:23:07.534 [2024-12-01 
15:03:40.433032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.534 [2024-12-01 15:03:40.433221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:07.534 [2024-12-01 15:03:40.441282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e84c0 00:23:07.534 [2024-12-01 15:03:40.442061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.534 [2024-12-01 15:03:40.442088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:07.534 [2024-12-01 15:03:40.450410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190f4f40 00:23:07.534 [2024-12-01 15:03:40.451023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.534 [2024-12-01 15:03:40.451051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:07.534 [2024-12-01 15:03:40.459301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190fcdd0 00:23:07.534 [2024-12-01 15:03:40.459812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.534 [2024-12-01 15:03:40.459838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:07.534 [2024-12-01 15:03:40.468108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190ddc00 00:23:07.534 [2024-12-01 15:03:40.469169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.534 [2024-12-01 15:03:40.469196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:07.534 [2024-12-01 15:03:40.477070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb20e0) with pdu=0x2000190e4de8 00:23:07.534 [2024-12-01 15:03:40.478213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.534 [2024-12-01 15:03:40.478354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:07.534 00:23:07.534 Latency(us) 00:23:07.534 [2024-12-01T15:03:40.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.534 [2024-12-01T15:03:40.649Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:07.534 nvme0n1 : 2.00 27290.57 106.60 0.00 0.00 4684.78 1876.71 11617.75 00:23:07.534 [2024-12-01T15:03:40.649Z] =================================================================================================================== 00:23:07.534 [2024-12-01T15:03:40.649Z] Total : 27290.57 106.60 0.00 0.00 4684.78 1876.71 11617.75 00:23:07.534 0 00:23:07.534 15:03:40 -- host/digest.sh@71 
-- # get_transient_errcount nvme0n1 00:23:07.534 15:03:40 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:07.534 15:03:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:07.534 15:03:40 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:07.534 | .driver_specific 00:23:07.534 | .nvme_error 00:23:07.534 | .status_code 00:23:07.534 | .command_transient_transport_error' 00:23:07.792 15:03:40 -- host/digest.sh@71 -- # (( 214 > 0 )) 00:23:07.792 15:03:40 -- host/digest.sh@73 -- # killprocess 97981 00:23:07.792 15:03:40 -- common/autotest_common.sh@936 -- # '[' -z 97981 ']' 00:23:07.792 15:03:40 -- common/autotest_common.sh@940 -- # kill -0 97981 00:23:07.792 15:03:40 -- common/autotest_common.sh@941 -- # uname 00:23:07.792 15:03:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:07.792 15:03:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97981 00:23:07.792 killing process with pid 97981 00:23:07.792 Received shutdown signal, test time was about 2.000000 seconds 00:23:07.792 00:23:07.792 Latency(us) 00:23:07.792 [2024-12-01T15:03:40.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.792 [2024-12-01T15:03:40.907Z] =================================================================================================================== 00:23:07.792 [2024-12-01T15:03:40.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.792 15:03:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:07.792 15:03:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:07.792 15:03:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97981' 00:23:07.792 15:03:40 -- common/autotest_common.sh@955 -- # kill 97981 00:23:07.792 15:03:40 -- common/autotest_common.sh@960 -- # wait 97981 00:23:08.050 15:03:41 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:08.050 15:03:41 -- host/digest.sh@54 -- # local rw bs qd 00:23:08.050 15:03:41 -- host/digest.sh@56 -- # rw=randwrite 00:23:08.050 15:03:41 -- host/digest.sh@56 -- # bs=131072 00:23:08.050 15:03:41 -- host/digest.sh@56 -- # qd=16 00:23:08.050 15:03:41 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:08.050 15:03:41 -- host/digest.sh@58 -- # bperfpid=98073 00:23:08.050 15:03:41 -- host/digest.sh@60 -- # waitforlisten 98073 /var/tmp/bperf.sock 00:23:08.050 15:03:41 -- common/autotest_common.sh@829 -- # '[' -z 98073 ']' 00:23:08.050 15:03:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:08.050 15:03:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.050 15:03:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:08.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:08.050 15:03:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.050 15:03:41 -- common/autotest_common.sh@10 -- # set +x 00:23:08.050 [2024-12-01 15:03:41.127301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
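The trace above is where host/digest.sh turns the flood of injected digest errors into a pass/fail decision: it queries the bdevperf app's per-bdev I/O statistics over the bperf RPC socket and pulls the transient-transport-error counter out of the returned JSON (214 in this run, so the (( 214 > 0 )) check passes). A minimal sketch of that query, using only the paths, socket name and jq filter visible in the log, not the script itself:

  # Sketch of get_transient_errcount as logged above (not a verbatim copy of host/digest.sh).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
  # Data-digest failures complete with TRANSIENT TRANSPORT ERROR (00/22),
  # so a non-zero count means the injected CRC errors were observed end to end.
  (( errcount > 0 )) && echo "transient transport errors counted: $errcount"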
00:23:08.050 [2024-12-01 15:03:41.127571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98073 ] 00:23:08.050 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:08.050 Zero copy mechanism will not be used. 00:23:08.308 [2024-12-01 15:03:41.259587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.308 [2024-12-01 15:03:41.334126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.244 15:03:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.244 15:03:42 -- common/autotest_common.sh@862 -- # return 0 00:23:09.244 15:03:42 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:09.244 15:03:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:09.244 15:03:42 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:09.244 15:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.244 15:03:42 -- common/autotest_common.sh@10 -- # set +x 00:23:09.502 15:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.502 15:03:42 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:09.502 15:03:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:09.761 nvme0n1 00:23:09.761 15:03:42 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:09.761 15:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.761 15:03:42 -- common/autotest_common.sh@10 -- # set +x 00:23:09.761 15:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.762 15:03:42 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:09.762 15:03:42 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:09.762 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:09.762 Zero copy mechanism will not be used. 00:23:09.762 Running I/O for 2 seconds... 
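Before this second run (131072-byte random writes, queue depth 16) starts issuing I/O, the trace shows the same preparation sequence as the 4096-byte run: bdevperf is started against the bperf RPC socket, NVMe error statistics are enabled with retries turned off, error injection is held disabled while the controller is attached with data digest (--ddgst) over TCP, and only then is the target's CRC32C accel operation switched to corrupt so the data digests of subsequent write PDUs go out wrong. A rough sketch of that sequence, assembled only from the commands logged above; the real host/digest.sh wraps them in helpers (bperf_rpc, rpc_cmd, waitforlisten), and rpc_cmd talks to the target's default RPC socket rather than the bperf one:

  # Sketch reconstructed from the logged commands, not host/digest.sh itself.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  # Initiator-side bdevperf: 128 KiB random writes, 2 s, QD 16, wait for start signal (-z).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &

  # Count NVMe error completions and never retry, so every digest error
  # remains visible in bdev_get_iostat.
  "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side: make sure no corruption is injected while connecting...
  "$rpc" accel_error_inject_error -o crc32c -t disable

  # ...attach the controller over TCP with data digest enabled...
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ...then start corrupting CRC32C results (-i 32 as logged) and run the timed job.
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests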
00:23:09.762 [2024-12-01 15:03:42.741199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.741677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.741724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.745629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.745975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.746001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.749893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.750023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.750047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.754166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.754293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.754333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.758360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.758458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.758480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.762725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.762857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.762880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.767077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.767333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.767366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.771488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.771709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.771733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.775727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.776075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.776112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.779929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.780052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.780076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.784100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.784195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.784218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.788301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.788397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.788419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.792444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.792547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.792569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.796684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.796866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.796888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.800897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.801214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.801252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.804967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.805095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.805116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.809216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.809384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.809406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.813270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.813592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.813625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.817336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.817471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.817495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.821595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.821743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.821781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.825678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.825795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.825817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.829889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.830095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.830133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.834100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.834282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-12-01 15:03:42.834306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-12-01 15:03:42.838158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.762 [2024-12-01 15:03:42.838263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-12-01 15:03:42.838285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.763 [2024-12-01 15:03:42.842322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.763 [2024-12-01 15:03:42.842501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-12-01 15:03:42.842522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.763 [2024-12-01 15:03:42.846470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.763 [2024-12-01 15:03:42.846779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-12-01 15:03:42.846824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.763 [2024-12-01 15:03:42.850548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.763 [2024-12-01 15:03:42.850681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-12-01 15:03:42.850703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.763 [2024-12-01 15:03:42.854796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.763 [2024-12-01 15:03:42.855010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-12-01 
15:03:42.855032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.763 [2024-12-01 15:03:42.858982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.763 [2024-12-01 15:03:42.859106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-12-01 15:03:42.859127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.763 [2024-12-01 15:03:42.863184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.763 [2024-12-01 15:03:42.863342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-12-01 15:03:42.863364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.763 [2024-12-01 15:03:42.867352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.763 [2024-12-01 15:03:42.867512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-12-01 15:03:42.867535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.763 [2024-12-01 15:03:42.871687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:09.763 [2024-12-01 15:03:42.871845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-12-01 15:03:42.871866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.023 [2024-12-01 15:03:42.876503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.023 [2024-12-01 15:03:42.876704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-12-01 15:03:42.876727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.023 [2024-12-01 15:03:42.881062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.023 [2024-12-01 15:03:42.881359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-12-01 15:03:42.881393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.023 [2024-12-01 15:03:42.885252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.023 [2024-12-01 15:03:42.885513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:10.023 [2024-12-01 15:03:42.885537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.023 [2024-12-01 15:03:42.889508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.023 [2024-12-01 15:03:42.889657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-12-01 15:03:42.889679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.023 [2024-12-01 15:03:42.893847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.023 [2024-12-01 15:03:42.893985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-12-01 15:03:42.894009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.023 [2024-12-01 15:03:42.898047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.023 [2024-12-01 15:03:42.898259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-12-01 15:03:42.898281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.023 [2024-12-01 15:03:42.902152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.023 [2024-12-01 15:03:42.902315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-12-01 15:03:42.902336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.023 [2024-12-01 15:03:42.906369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.023 [2024-12-01 15:03:42.906496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-12-01 15:03:42.906517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.023 [2024-12-01 15:03:42.910691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.023 [2024-12-01 15:03:42.910899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-12-01 15:03:42.910921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.023 [2024-12-01 15:03:42.914914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.023 [2024-12-01 15:03:42.915113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.915135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.919147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.919306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.919328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.923341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.923501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.923523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.927549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.927682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.927703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.931712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.931914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.931936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.935871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.936013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.936036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.939982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.940080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.940101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.944136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.944331] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.944353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.948387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.948638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.948709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.952487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.952598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.952621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.956716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.956914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.956936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.960831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.961168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.961207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.964893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.965117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.965138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.969177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.969295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.969317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.973266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.973394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.973414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.977586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.977669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.977692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.981808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.981925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.981946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.985823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.985928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.985950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.989954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.990146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.990167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.994028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.994338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.994375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:42.998172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:42.998296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:42.998317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:43.002406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 
[2024-12-01 15:03:43.002531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:43.002551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:43.006606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:43.006700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:43.006721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:43.010765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:43.010930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:43.010953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:43.014955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:43.015072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:43.015093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:43.019075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:43.019171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:43.019191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:43.023321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.024 [2024-12-01 15:03:43.023489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-12-01 15:03:43.023510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-12-01 15:03:43.027468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.027736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.027793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.031544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with 
pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.031724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.031745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.035805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.035956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.035977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.039886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.040000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.040021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.044059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.044276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.044298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.048355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.048479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.048500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.052485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.052576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.052597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.056819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.057018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.057053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.061230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.061474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.061507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.065429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.065535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.065557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.069749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.069982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.070022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.073979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.074097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.074149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.078247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.078391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.078411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.082463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.082584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.082605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.086610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.086715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.086737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.090806] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.090990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.091011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.094924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.095246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.095285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.098969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.099096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.099117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.103234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.103405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.103425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.107326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.107453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.107473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.111436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.111581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.111601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.115548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.115693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.115715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 
15:03:43.119656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.119754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.119788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.123858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.124025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.124046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.128040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.128334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.128367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.025 [2024-12-01 15:03:43.132382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.025 [2024-12-01 15:03:43.132494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-12-01 15:03:43.132515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.137107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.137279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.137299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.141424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.141626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.141647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.145915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.146084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.146105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.150039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.150141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.150162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.154216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.154307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.154329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.158458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.158626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.158646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.162580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.162725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.162746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.166734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.166861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.166883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.170994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.171134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.171172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.175115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.175232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.175253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.179402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.179553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.179574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.183553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.183687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.183708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.187874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.187989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.188010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.192141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.192312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.192333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.196362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.196554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.196576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.200602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.200756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-12-01 15:03:43.200790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.286 [2024-12-01 15:03:43.204879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.286 [2024-12-01 15:03:43.205043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.205064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.209030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.209176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.209197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.213252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.213429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.213476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.217337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.217473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.217496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.221455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.221572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.221595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.225640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.225853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.225875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.229804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.230159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.230204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.233975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.234095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 
[2024-12-01 15:03:43.234115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.238169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.238332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.238353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.242291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.242450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.242470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.246467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.246593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.246614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.250702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.250881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.250903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.254878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.254985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.255006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.259043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.259238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.259259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.263258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.263356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.263378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.267361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.267453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.267475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.271521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.271689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.271710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.275611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.275812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.275833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.279793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.279891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.279912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.284001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.284203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.284239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.288138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.288235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.288256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.292267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.292419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.292440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.296353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.296552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.296573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.300438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.300526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.300547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.304553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.304721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.304742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.308766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.309018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.309081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.312966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.287 [2024-12-01 15:03:43.313082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.287 [2024-12-01 15:03:43.313103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.287 [2024-12-01 15:03:43.317202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.317357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.317379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.321308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.321427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.321477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.325632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.325883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.325905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.329954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.330087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.330108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.334057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.334157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.334177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.338190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.338365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.338387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.342359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.342580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.342600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.346633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.346832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.346853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.350775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 
[2024-12-01 15:03:43.350958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.350979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.354871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.355052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.355072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.359015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.359223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.359244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.363123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.363266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.363286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.367191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.367304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.367325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.371405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.371583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.371604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.375521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.375713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.375734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.379693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with 
pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.379909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.379930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.383961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.384110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.384131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.388033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.388121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.388141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.392276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.392456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.392478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.288 [2024-12-01 15:03:43.396793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.288 [2024-12-01 15:03:43.396907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.288 [2024-12-01 15:03:43.396928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.548 [2024-12-01 15:03:43.401301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.548 [2024-12-01 15:03:43.401428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.548 [2024-12-01 15:03:43.401479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.548 [2024-12-01 15:03:43.405890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.548 [2024-12-01 15:03:43.406054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.548 [2024-12-01 15:03:43.406075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.548 [2024-12-01 15:03:43.410070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.548 [2024-12-01 15:03:43.410395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.548 [2024-12-01 15:03:43.410433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.548 [2024-12-01 15:03:43.414252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.548 [2024-12-01 15:03:43.414342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.548 [2024-12-01 15:03:43.414363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.548 [2024-12-01 15:03:43.418568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.548 [2024-12-01 15:03:43.418698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.548 [2024-12-01 15:03:43.418718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.548 [2024-12-01 15:03:43.422775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.548 [2024-12-01 15:03:43.422939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.548 [2024-12-01 15:03:43.422960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.548 [2024-12-01 15:03:43.427012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.548 [2024-12-01 15:03:43.427175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.548 [2024-12-01 15:03:43.427196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.548 [2024-12-01 15:03:43.431222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.431331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.431351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.435319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.435438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.435459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.439496] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.439659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.439679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.443601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.443837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.443858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.447693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.447844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.447865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.451967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.452142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.452163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.456067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.456193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.456213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.460173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.460325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.460347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.464308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.464452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.464473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
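The repeated "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs above come from the digest-error injection test: the NVMe/TCP data digest (DDGST) is a CRC32C over the PDU's data payload, and when the receiver's computed value does not match the digest carried in the PDU, the I/O completes with a transient transport error. The sketch below is purely illustrative of that check; the function names and the standalone software CRC32C helper are assumptions for this example, not SPDK's actual implementation.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli polynomial, reflected form 0x82F63B78),
 * initial value ~0, final complement - the digest used for NVMe/TCP DDGST. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return ~crc;
}

/* Compare the digest carried in a received PDU against the payload.
 * A mismatch corresponds to the "Data digest error" lines in this log. */
static bool data_digest_ok(const uint8_t *payload, size_t len, uint32_t received_ddgst)
{
    return crc32c(payload, len) == received_ddgst;
}

int main(void)
{
    uint8_t payload[32] = { 0 };  /* stand-in for a 32-block write's data */
    uint32_t good = crc32c(payload, sizeof(payload));

    printf("digest ok:      %d\n", data_digest_ok(payload, sizeof(payload), good));
    printf("digest corrupt: %d\n", data_digest_ok(payload, sizeof(payload), good ^ 1u));
    return 0;
}

Under this reading, each triplet in the log is one injected corruption: the transport reports the CRC32C mismatch, prints the affected WRITE command (sqid/cid/lba), and completes it with status 00/22 so the upper layer can retry rather than treat it as a media error.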
00:23:10.549 [2024-12-01 15:03:43.468380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.468495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.468516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.472562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.472729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.472750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.476747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.477049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.477086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.480854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.480969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.480989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.485090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.485184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.485204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.489200] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.489300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.489320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.493224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.493341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.493362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.497661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.497885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.497906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.501805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.501932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.501952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.506023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.506192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.506213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.510046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.510198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.510219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.514166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.514291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.514311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.518381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.518552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.518573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.522442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.522541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.522563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.526585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.526796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.526817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.530626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.530808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.530831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.534716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.534892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.549 [2024-12-01 15:03:43.534914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.549 [2024-12-01 15:03:43.538843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.549 [2024-12-01 15:03:43.539022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.539043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.542818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.543106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.543154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.546871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.547059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.547080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.551187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.551357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.551377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.555265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.555380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.555401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.559472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.559614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.559635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.563626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.563718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.563739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.567820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.567938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.567960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.572076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.572269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.572290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.576437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.576668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.576688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.580685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.580934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 
[2024-12-01 15:03:43.580958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.584933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.585102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.585123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.589111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.589239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.589260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.593401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.593599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.593620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.597658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.597757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.597834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.601951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.602076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.602096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.606267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.606434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.606455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.610439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.610657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.610678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.614717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.614952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.614973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.618926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.619115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.619135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.623115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.623235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.623256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.627354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.627527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.627548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.631607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.631715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.631736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.635842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.635982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.636003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.640183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.640346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.640366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.644482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.644751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.644842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.648639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.648742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.550 [2024-12-01 15:03:43.648780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.550 [2024-12-01 15:03:43.652928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.550 [2024-12-01 15:03:43.653150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.551 [2024-12-01 15:03:43.653187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.551 [2024-12-01 15:03:43.657279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.551 [2024-12-01 15:03:43.657407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.551 [2024-12-01 15:03:43.657427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.662170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.810 [2024-12-01 15:03:43.662326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.810 [2024-12-01 15:03:43.662347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.666467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.810 [2024-12-01 15:03:43.666611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.810 [2024-12-01 15:03:43.666632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.670929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.810 [2024-12-01 15:03:43.671024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.810 [2024-12-01 15:03:43.671045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.675185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.810 [2024-12-01 15:03:43.675351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.810 [2024-12-01 15:03:43.675372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.679472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.810 [2024-12-01 15:03:43.679725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.810 [2024-12-01 15:03:43.679796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.683581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.810 [2024-12-01 15:03:43.683751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.810 [2024-12-01 15:03:43.683786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.687840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.810 [2024-12-01 15:03:43.687981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.810 [2024-12-01 15:03:43.688001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.691964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.810 [2024-12-01 15:03:43.692072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.810 [2024-12-01 15:03:43.692093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.696163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.810 [2024-12-01 15:03:43.696326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.810 [2024-12-01 15:03:43.696346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.700305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.810 
[2024-12-01 15:03:43.700435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.810 [2024-12-01 15:03:43.700455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.810 [2024-12-01 15:03:43.704547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.704651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.704673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.708695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.708878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.708900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.712855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.713172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.713216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.716950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.717056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.717077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.721105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.721271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.721291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.725171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.725380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.725401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.729412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.729635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.729657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.733564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.733724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.733746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.737719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.737846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.737867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.741987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.742166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.742186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.746100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.746257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.746278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.750207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.750323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.750343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.754401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.754590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.754611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.758522] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.758786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.758848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.762801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.762920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.762941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.767030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.767223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.767244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.771117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.771217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.771238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.775251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.775397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.775418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.779434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.779551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.779573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.783480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.783572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.783593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.787655] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.787846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.787868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.791857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.792133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.792168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.795850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.795941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.795962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.800059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.800206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.800227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.804313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.804477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.804497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.808474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.811 [2024-12-01 15:03:43.808619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.811 [2024-12-01 15:03:43.808640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.811 [2024-12-01 15:03:43.812640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.812736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.812757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.812 
[2024-12-01 15:03:43.816785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.816913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.816933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.820889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.821070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.821092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.825058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.825292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.825335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.829402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.829620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.829641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.833536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.833695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.833717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.837610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.837732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.837753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.841699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.841882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.841905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.845875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.846005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.846026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.849977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.850089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.850111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.854181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.854349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.854370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.858138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.858364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.858385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.862349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.862583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.862613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.866474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.866667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.866688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.870587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.870694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.870716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.874774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.874919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.874940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.878865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.879010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.879031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.882893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.883003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.883023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.887145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.887309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.887330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.891241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.891401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.891422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.895369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.895466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.895487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.899518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.899638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.899660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.903648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.903745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.903778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.907829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.907990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.908011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.911942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.912040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.812 [2024-12-01 15:03:43.912061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.812 [2024-12-01 15:03:43.916047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.812 [2024-12-01 15:03:43.916140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.813 [2024-12-01 15:03:43.916160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.813 [2024-12-01 15:03:43.920414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:10.813 [2024-12-01 15:03:43.920600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.813 [2024-12-01 15:03:43.920622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.072 [2024-12-01 15:03:43.925136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.072 [2024-12-01 15:03:43.925395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.072 [2024-12-01 15:03:43.925487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.072 [2024-12-01 15:03:43.929720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.072 [2024-12-01 15:03:43.929978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.072 
[2024-12-01 15:03:43.930000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.072 [2024-12-01 15:03:43.934048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.072 [2024-12-01 15:03:43.934223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.072 [2024-12-01 15:03:43.934244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.072 [2024-12-01 15:03:43.938144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.072 [2024-12-01 15:03:43.938288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.072 [2024-12-01 15:03:43.938309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.072 [2024-12-01 15:03:43.942317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.072 [2024-12-01 15:03:43.942473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.072 [2024-12-01 15:03:43.942494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.072 [2024-12-01 15:03:43.946612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.072 [2024-12-01 15:03:43.946721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.072 [2024-12-01 15:03:43.946743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.072 [2024-12-01 15:03:43.950759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.072 [2024-12-01 15:03:43.950898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.950919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.954937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.955108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.955129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.959128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.959389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.959451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.963248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.963479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.963500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.967510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.967724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.967745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.971697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.971830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.971867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.976054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.976268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.976292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.980302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.980396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.980418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.984483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.984594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.984616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.988765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.988936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.988958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.992960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.993238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.993286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:43.997264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:43.997408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:43.997429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.001663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.001856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.001878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.005853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.006002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.006025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.010076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.010228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.010250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.014222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.014362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.014384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.018517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.018645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.018669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.022705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.022922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.022945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.026897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.027114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.027183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.031025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.031168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.031190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.035355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.035598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.035623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.039485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.039604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.039626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.043705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.043895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.043918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.047904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 
[2024-12-01 15:03:44.048016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.048038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.052156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.052276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.052298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.056382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.056563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.056585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.060625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.073 [2024-12-01 15:03:44.060852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.073 [2024-12-01 15:03:44.060874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.073 [2024-12-01 15:03:44.064817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.065132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.065156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.069061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.069237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.069260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.073285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.073421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.073481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.077686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.077885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.077935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.082139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.082342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.082373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.086340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.086474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.086495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.090683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.090905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.090927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.094943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.095103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.095142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.099167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.099285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.099306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.103377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.103569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.103591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.107524] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.107633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.107655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.111674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.111866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.111893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.115803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.115926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.115948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.119979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.120071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.120092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.124164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.124345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.124375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.128349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.128567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.128589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.132538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.132776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.132830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:11.074 [2024-12-01 15:03:44.136599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.136824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.136846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.140687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.140825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.140847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.144871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.145028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.145050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.149030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.149205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.149228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.153125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.153289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.153312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.157284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.157476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.157499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.161423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.161729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.161779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.165468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.165568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.165590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.169736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.169954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.169976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.173887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.174000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.074 [2024-12-01 15:03:44.174022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.074 [2024-12-01 15:03:44.178096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.074 [2024-12-01 15:03:44.178261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.075 [2024-12-01 15:03:44.178282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.075 [2024-12-01 15:03:44.182400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.075 [2024-12-01 15:03:44.182529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.075 [2024-12-01 15:03:44.182552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.187296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.187403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.187425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.191753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.191965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.191986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.195921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.196179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.196241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.200043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.200177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.200197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.204347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.204480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.204501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.208502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.208599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.208621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.212636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.212795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.212816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.216821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.216962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.216984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.221006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.221116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.221137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.225332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.225553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.225575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.229456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.229749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.229787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.233577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.233664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.233686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.237850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.238006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.238026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.241881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.241988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.242009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.246019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.246166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.246187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.250167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.250309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 
[2024-12-01 15:03:44.250331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.254259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.254349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.254370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.258425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.258610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.258631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.262548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.262816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.262839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.266774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.267014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.267037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.270894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.271019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.271040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.275000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.275115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.275136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.279170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.279310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.279331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.335 [2024-12-01 15:03:44.283299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.335 [2024-12-01 15:03:44.283443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.335 [2024-12-01 15:03:44.283464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.287423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.287523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.287544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.291662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.291844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.291865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.295773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.296013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.296035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.299769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.299900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.299921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.303966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.304123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.304144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.308065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.308191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.308212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.312208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.312362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.312384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.316297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.316412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.316432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.320474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.320579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.320601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.324704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.324886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.324908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.328804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.329061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.329124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.332877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.332967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.332988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.337103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.337227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.337248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.341224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.341321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.341342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.345348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.345522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.345544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.349572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.349678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.349700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.353663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.353778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.353811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.357940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.358112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.358134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.362076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.362399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.362445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.366196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.366320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.366341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.370450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.370639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.370659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.374565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.374695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.374715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.378692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.336 [2024-12-01 15:03:44.378876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.336 [2024-12-01 15:03:44.378898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.336 [2024-12-01 15:03:44.382832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.382973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.382994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.386953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.387069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.387090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.391133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.391296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.391317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.395232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 
[2024-12-01 15:03:44.395481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.395501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.399594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.399698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.399719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.403990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.404119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.404140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.408090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.408212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.408233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.412196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.412336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.412357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.416281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.416379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.416400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.420402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.420503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.420524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.424481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with 
pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.424646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.424667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.428692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.428938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.428961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.432814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.432993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.433013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.436987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.437161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.437182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.441015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.441109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.441129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.337 [2024-12-01 15:03:44.445509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.337 [2024-12-01 15:03:44.445667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.337 [2024-12-01 15:03:44.445688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.450107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.597 [2024-12-01 15:03:44.450227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.597 [2024-12-01 15:03:44.450247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.454511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.597 [2024-12-01 15:03:44.454649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.597 [2024-12-01 15:03:44.454669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.458872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.597 [2024-12-01 15:03:44.459055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.597 [2024-12-01 15:03:44.459075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.463060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.597 [2024-12-01 15:03:44.463329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.597 [2024-12-01 15:03:44.463375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.467190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.597 [2024-12-01 15:03:44.467310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.597 [2024-12-01 15:03:44.467332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.471428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.597 [2024-12-01 15:03:44.471564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.597 [2024-12-01 15:03:44.471585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.475603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.597 [2024-12-01 15:03:44.475741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.597 [2024-12-01 15:03:44.475761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.479743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.597 [2024-12-01 15:03:44.479906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.597 [2024-12-01 15:03:44.479927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.483893] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.597 [2024-12-01 15:03:44.484018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.597 [2024-12-01 15:03:44.484039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.488000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.597 [2024-12-01 15:03:44.488121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.597 [2024-12-01 15:03:44.488141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.597 [2024-12-01 15:03:44.492152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.492314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.492335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.496197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.496448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.496510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.500222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.500368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.500388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.504397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.504582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.504603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.508500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.508695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.508715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.598 
[2024-12-01 15:03:44.512646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.512801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.512823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.516770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.516877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.516904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.520859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.520953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.520973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.524997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.525165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.525186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.529153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.529414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.529486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.533236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.533345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.533366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.537514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.537663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.537684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.541560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.541652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.541673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.545675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.545839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.545861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.549781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.549940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.549961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.553863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.553973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.553995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.557991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.558157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.558177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.562193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.562402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.562423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.566387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.566591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.566611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.570509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.570630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.570652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.574651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.574750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.574785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.578910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.579053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.579074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.583078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.583205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.583226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.587095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.587226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.587247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.591184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.591337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.591358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.595299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.595471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.595492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.599387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.598 [2024-12-01 15:03:44.599529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.598 [2024-12-01 15:03:44.599550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.598 [2024-12-01 15:03:44.603589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.603721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.603742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.607778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.607871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.607892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.611914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.612095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.612116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.616074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.616191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.616213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.620131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.620237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.620258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.624267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.624442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.624463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.628398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.628647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.628696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.632529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.632657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.632677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.636697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.636895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.636917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.640916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.641176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.641208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.644965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.645097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.645118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.649145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.649293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.649315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.653237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.653374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 
[2024-12-01 15:03:44.653395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.657364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.657568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.657590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.661558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.661668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.661689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.665620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.665754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.665792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.669784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.669951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.669972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.673867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.674161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.674188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.677956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.678101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.678122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.682181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.682291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.682311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.686288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.686372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.686392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.690508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.690639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.690659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.694633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.694713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.694733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.698849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.698940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.698961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.703137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.703288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.703309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.599 [2024-12-01 15:03:44.707564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.599 [2024-12-01 15:03:44.707752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.599 [2024-12-01 15:03:44.707775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.858 [2024-12-01 15:03:44.712411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.858 [2024-12-01 15:03:44.712611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-12-01 15:03:44.712631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.858 [2024-12-01 15:03:44.717138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.858 [2024-12-01 15:03:44.717377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-12-01 15:03:44.717551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.858 [2024-12-01 15:03:44.721506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.858 [2024-12-01 15:03:44.721638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-12-01 15:03:44.721661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.858 [2024-12-01 15:03:44.725702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.858 [2024-12-01 15:03:44.725909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-12-01 15:03:44.725931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.858 [2024-12-01 15:03:44.729874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.858 [2024-12-01 15:03:44.730053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-12-01 15:03:44.730074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.858 [2024-12-01 15:03:44.733973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb2420) with pdu=0x2000190fef90 00:23:11.858 [2024-12-01 15:03:44.734047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-12-01 15:03:44.734068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.858 00:23:11.858 Latency(us) 00:23:11.858 [2024-12-01T15:03:44.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.858 [2024-12-01T15:03:44.973Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:11.858 nvme0n1 : 2.00 7371.52 921.44 0.00 0.00 2165.57 1638.40 6732.33 00:23:11.858 [2024-12-01T15:03:44.973Z] =================================================================================================================== 00:23:11.858 [2024-12-01T15:03:44.973Z] Total : 7371.52 921.44 0.00 0.00 2165.57 1638.40 6732.33 00:23:11.858 0 00:23:11.858 15:03:44 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:11.858 15:03:44 -- 
host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:11.858 15:03:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:11.858 15:03:44 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:11.858 | .driver_specific 00:23:11.858 | .nvme_error 00:23:11.858 | .status_code 00:23:11.858 | .command_transient_transport_error' 00:23:11.858 15:03:44 -- host/digest.sh@71 -- # (( 476 > 0 )) 00:23:11.858 15:03:44 -- host/digest.sh@73 -- # killprocess 98073 00:23:11.858 15:03:44 -- common/autotest_common.sh@936 -- # '[' -z 98073 ']' 00:23:11.858 15:03:44 -- common/autotest_common.sh@940 -- # kill -0 98073 00:23:12.118 15:03:44 -- common/autotest_common.sh@941 -- # uname 00:23:12.118 15:03:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:12.118 15:03:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98073 00:23:12.118 killing process with pid 98073 00:23:12.118 Received shutdown signal, test time was about 2.000000 seconds 00:23:12.118 00:23:12.118 Latency(us) 00:23:12.118 [2024-12-01T15:03:45.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.118 [2024-12-01T15:03:45.233Z] =================================================================================================================== 00:23:12.118 [2024-12-01T15:03:45.233Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.118 15:03:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:12.118 15:03:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:12.118 15:03:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98073' 00:23:12.118 15:03:45 -- common/autotest_common.sh@955 -- # kill 98073 00:23:12.118 15:03:45 -- common/autotest_common.sh@960 -- # wait 98073 00:23:12.380 15:03:45 -- host/digest.sh@115 -- # killprocess 97776 00:23:12.380 15:03:45 -- common/autotest_common.sh@936 -- # '[' -z 97776 ']' 00:23:12.380 15:03:45 -- common/autotest_common.sh@940 -- # kill -0 97776 00:23:12.380 15:03:45 -- common/autotest_common.sh@941 -- # uname 00:23:12.380 15:03:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:12.380 15:03:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97776 00:23:12.380 killing process with pid 97776 00:23:12.380 15:03:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:12.380 15:03:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:12.380 15:03:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97776' 00:23:12.380 15:03:45 -- common/autotest_common.sh@955 -- # kill 97776 00:23:12.380 15:03:45 -- common/autotest_common.sh@960 -- # wait 97776 00:23:12.380 00:23:12.380 real 0m17.352s 00:23:12.380 user 0m32.068s 00:23:12.380 sys 0m5.410s 00:23:12.380 15:03:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:12.380 15:03:45 -- common/autotest_common.sh@10 -- # set +x 00:23:12.380 ************************************ 00:23:12.380 END TEST nvmf_digest_error 00:23:12.380 ************************************ 00:23:12.638 15:03:45 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:12.638 15:03:45 -- host/digest.sh@139 -- # nvmftestfini 00:23:12.638 15:03:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:12.638 15:03:45 -- nvmf/common.sh@116 -- # sync 00:23:12.638 15:03:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:12.638 15:03:45 -- nvmf/common.sh@119 -- # set +e 00:23:12.638 15:03:45 -- nvmf/common.sh@120 -- # for 
i in {1..20} 00:23:12.638 15:03:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:12.638 rmmod nvme_tcp 00:23:12.638 rmmod nvme_fabrics 00:23:12.638 rmmod nvme_keyring 00:23:12.638 15:03:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:12.638 15:03:45 -- nvmf/common.sh@123 -- # set -e 00:23:12.638 15:03:45 -- nvmf/common.sh@124 -- # return 0 00:23:12.638 15:03:45 -- nvmf/common.sh@477 -- # '[' -n 97776 ']' 00:23:12.638 15:03:45 -- nvmf/common.sh@478 -- # killprocess 97776 00:23:12.638 15:03:45 -- common/autotest_common.sh@936 -- # '[' -z 97776 ']' 00:23:12.638 15:03:45 -- common/autotest_common.sh@940 -- # kill -0 97776 00:23:12.638 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97776) - No such process 00:23:12.638 15:03:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97776 is not found' 00:23:12.638 Process with pid 97776 is not found 00:23:12.638 15:03:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:12.638 15:03:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:12.638 15:03:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:12.638 15:03:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:12.638 15:03:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:12.638 15:03:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.638 15:03:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.638 15:03:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.638 15:03:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:12.638 ************************************ 00:23:12.638 END TEST nvmf_digest 00:23:12.638 ************************************ 00:23:12.638 00:23:12.638 real 0m35.397s 00:23:12.638 user 1m3.855s 00:23:12.638 sys 0m11.400s 00:23:12.638 15:03:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:12.638 15:03:45 -- common/autotest_common.sh@10 -- # set +x 00:23:12.638 15:03:45 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:12.638 15:03:45 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:12.638 15:03:45 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:12.638 15:03:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:12.638 15:03:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:12.638 15:03:45 -- common/autotest_common.sh@10 -- # set +x 00:23:12.638 ************************************ 00:23:12.638 START TEST nvmf_mdns_discovery 00:23:12.638 ************************************ 00:23:12.638 15:03:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:12.897 * Looking for test storage... 
00:23:12.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:12.897 15:03:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:12.897 15:03:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:12.897 15:03:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:12.897 15:03:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:12.897 15:03:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:12.897 15:03:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:12.897 15:03:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:12.897 15:03:45 -- scripts/common.sh@335 -- # IFS=.-: 00:23:12.897 15:03:45 -- scripts/common.sh@335 -- # read -ra ver1 00:23:12.897 15:03:45 -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.897 15:03:45 -- scripts/common.sh@336 -- # read -ra ver2 00:23:12.897 15:03:45 -- scripts/common.sh@337 -- # local 'op=<' 00:23:12.897 15:03:45 -- scripts/common.sh@339 -- # ver1_l=2 00:23:12.897 15:03:45 -- scripts/common.sh@340 -- # ver2_l=1 00:23:12.897 15:03:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:12.897 15:03:45 -- scripts/common.sh@343 -- # case "$op" in 00:23:12.897 15:03:45 -- scripts/common.sh@344 -- # : 1 00:23:12.897 15:03:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:12.897 15:03:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:12.897 15:03:45 -- scripts/common.sh@364 -- # decimal 1 00:23:12.897 15:03:45 -- scripts/common.sh@352 -- # local d=1 00:23:12.897 15:03:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.897 15:03:45 -- scripts/common.sh@354 -- # echo 1 00:23:12.897 15:03:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:12.897 15:03:45 -- scripts/common.sh@365 -- # decimal 2 00:23:12.897 15:03:45 -- scripts/common.sh@352 -- # local d=2 00:23:12.897 15:03:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.897 15:03:45 -- scripts/common.sh@354 -- # echo 2 00:23:12.897 15:03:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:12.897 15:03:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:12.897 15:03:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:12.897 15:03:45 -- scripts/common.sh@367 -- # return 0 00:23:12.897 15:03:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.897 15:03:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:12.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.897 --rc genhtml_branch_coverage=1 00:23:12.897 --rc genhtml_function_coverage=1 00:23:12.897 --rc genhtml_legend=1 00:23:12.897 --rc geninfo_all_blocks=1 00:23:12.897 --rc geninfo_unexecuted_blocks=1 00:23:12.897 00:23:12.897 ' 00:23:12.897 15:03:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:12.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.897 --rc genhtml_branch_coverage=1 00:23:12.897 --rc genhtml_function_coverage=1 00:23:12.897 --rc genhtml_legend=1 00:23:12.897 --rc geninfo_all_blocks=1 00:23:12.897 --rc geninfo_unexecuted_blocks=1 00:23:12.897 00:23:12.897 ' 00:23:12.897 15:03:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:12.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.897 --rc genhtml_branch_coverage=1 00:23:12.897 --rc genhtml_function_coverage=1 00:23:12.897 --rc genhtml_legend=1 00:23:12.897 --rc geninfo_all_blocks=1 00:23:12.897 --rc geninfo_unexecuted_blocks=1 00:23:12.897 00:23:12.897 ' 00:23:12.897 
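The xtrace above walks through the version check that gates the lcov options: scripts/common.sh splits each version string on '.', '-' and ':' and compares the components left to right ("is lcov 1.15 older than 2?"). A minimal standalone sketch of that element-wise comparison, written here purely for illustration (it is not the code in scripts/common.sh), could look like this:

#!/usr/bin/env bash
# Hypothetical, simplified version comparison in the spirit of the
# cmp_versions trace above; the real helper lives in scripts/common.sh.
version_lt() {
    local IFS=.-:                     # split on the same separators as the trace
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0       # first smaller component: strictly older
        (( x > y )) && return 1       # first larger component: not older
    done
    return 1                          # equal versions are not "less than"
}

# Example: the decision traced above
if version_lt 1.15 2; then
    echo "lcov 1.15 < 2: fall back to the older lcov option set"
fi

The same pattern, numeric comparison component by component after splitting on the separator set, is what lets the harness select the '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' option set seen in the trace for this lcov release.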
15:03:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:12.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.897 --rc genhtml_branch_coverage=1 00:23:12.897 --rc genhtml_function_coverage=1 00:23:12.897 --rc genhtml_legend=1 00:23:12.897 --rc geninfo_all_blocks=1 00:23:12.897 --rc geninfo_unexecuted_blocks=1 00:23:12.897 00:23:12.897 ' 00:23:12.897 15:03:45 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:12.897 15:03:45 -- nvmf/common.sh@7 -- # uname -s 00:23:12.897 15:03:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.897 15:03:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.897 15:03:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.897 15:03:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.897 15:03:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.897 15:03:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.897 15:03:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.897 15:03:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.897 15:03:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.897 15:03:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.897 15:03:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:23:12.897 15:03:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:23:12.897 15:03:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.897 15:03:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.897 15:03:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:12.897 15:03:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:12.897 15:03:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.897 15:03:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.897 15:03:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.897 15:03:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.897 15:03:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.897 15:03:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.897 15:03:45 -- paths/export.sh@5 -- # export PATH 00:23:12.897 15:03:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.897 15:03:45 -- nvmf/common.sh@46 -- # : 0 00:23:12.897 15:03:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:12.897 15:03:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:12.897 15:03:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:12.897 15:03:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.898 15:03:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.898 15:03:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:12.898 15:03:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:12.898 15:03:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:12.898 15:03:45 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:12.898 15:03:45 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:12.898 15:03:45 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:12.898 15:03:45 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:12.898 15:03:45 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:12.898 15:03:45 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:12.898 15:03:45 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:12.898 15:03:45 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:12.898 15:03:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:12.898 15:03:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.898 15:03:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:12.898 15:03:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:12.898 15:03:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:12.898 15:03:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.898 15:03:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.898 15:03:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.898 15:03:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:12.898 15:03:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:12.898 15:03:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:12.898 15:03:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:12.898 15:03:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:12.898 15:03:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:12.898 15:03:45 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:12.898 15:03:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.898 15:03:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:12.898 15:03:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:12.898 15:03:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:12.898 15:03:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:12.898 15:03:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:12.898 15:03:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.898 15:03:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:12.898 15:03:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:12.898 15:03:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:12.898 15:03:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:12.898 15:03:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:12.898 15:03:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:12.898 Cannot find device "nvmf_tgt_br" 00:23:12.898 15:03:45 -- nvmf/common.sh@154 -- # true 00:23:12.898 15:03:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:12.898 Cannot find device "nvmf_tgt_br2" 00:23:12.898 15:03:45 -- nvmf/common.sh@155 -- # true 00:23:12.898 15:03:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:12.898 15:03:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:12.898 Cannot find device "nvmf_tgt_br" 00:23:12.898 15:03:46 -- nvmf/common.sh@157 -- # true 00:23:12.898 15:03:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:13.156 Cannot find device "nvmf_tgt_br2" 00:23:13.156 15:03:46 -- nvmf/common.sh@158 -- # true 00:23:13.156 15:03:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:13.156 15:03:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:13.156 15:03:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:13.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.156 15:03:46 -- nvmf/common.sh@161 -- # true 00:23:13.156 15:03:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:13.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.156 15:03:46 -- nvmf/common.sh@162 -- # true 00:23:13.156 15:03:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:13.156 15:03:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:13.156 15:03:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:13.156 15:03:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:13.156 15:03:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:13.156 15:03:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:13.156 15:03:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:13.156 15:03:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:13.156 15:03:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:13.156 15:03:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:13.156 15:03:46 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:23:13.156 15:03:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:13.156 15:03:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:13.156 15:03:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:13.156 15:03:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:13.156 15:03:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:13.156 15:03:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:13.156 15:03:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:13.156 15:03:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:13.156 15:03:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:13.156 15:03:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:13.156 15:03:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:13.157 15:03:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:13.157 15:03:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:13.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:23:13.157 00:23:13.157 --- 10.0.0.2 ping statistics --- 00:23:13.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.157 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:23:13.157 15:03:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:13.415 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:13.415 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:23:13.415 00:23:13.415 --- 10.0.0.3 ping statistics --- 00:23:13.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.415 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:23:13.415 15:03:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:13.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:23:13.415 00:23:13.415 --- 10.0.0.1 ping statistics --- 00:23:13.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.415 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:23:13.415 15:03:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.415 15:03:46 -- nvmf/common.sh@421 -- # return 0 00:23:13.415 15:03:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:13.415 15:03:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.415 15:03:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:13.415 15:03:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:13.415 15:03:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.415 15:03:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:13.415 15:03:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:13.415 15:03:46 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:13.415 15:03:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:13.415 15:03:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.415 15:03:46 -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 15:03:46 -- nvmf/common.sh@469 -- # nvmfpid=98374 00:23:13.415 15:03:46 -- nvmf/common.sh@470 -- # waitforlisten 98374 00:23:13.415 15:03:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:13.415 15:03:46 -- common/autotest_common.sh@829 -- # '[' -z 98374 ']' 00:23:13.415 15:03:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.415 15:03:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.415 15:03:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.415 15:03:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.415 15:03:46 -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 [2024-12-01 15:03:46.352641] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:13.415 [2024-12-01 15:03:46.352731] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.415 [2024-12-01 15:03:46.494451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.674 [2024-12-01 15:03:46.606380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:13.674 [2024-12-01 15:03:46.606514] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.674 [2024-12-01 15:03:46.606527] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.674 [2024-12-01 15:03:46.606535] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
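For readability, the namespace/veth topology that the nvmf_common.sh trace above builds can be condensed into the sketch below. It is assembled only from the ip/iptables/ping commands visible in the log; the names (nvmf_tgt_ns_spdk, nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_if2, nvmf_br) and the 10.0.0.x addresses are the ones shown, while the ordering and comments are interpretation rather than a verbatim copy of the script.

  # Hedged sketch of the veth/bridge setup traced above (not the canonical script).
  ip netns add nvmf_tgt_ns_spdk
  # One veth pair per endpoint: the *_if end carries traffic, the *_br end joins the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # Target-side interfaces live inside the namespace.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: initiator 10.0.0.1, first target 10.0.0.2, second target 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bring links up and attach the bridge ends to nvmf_br.
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Allow NVMe/TCP (4420) in, allow forwarding across the bridge, then verify reachability.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With that in place, the target application is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc), exactly as the waitforlisten block above shows.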
00:23:13.674 [2024-12-01 15:03:46.606568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.609 15:03:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.609 15:03:47 -- common/autotest_common.sh@862 -- # return 0 00:23:14.609 15:03:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:14.609 15:03:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.609 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.609 15:03:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.609 15:03:47 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:14.609 15:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.609 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.609 15:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.609 15:03:47 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:14.609 15:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.609 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.609 15:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.609 15:03:47 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.609 15:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.609 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.609 [2024-12-01 15:03:47.560411] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.609 15:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.609 15:03:47 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:14.609 15:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.609 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.609 [2024-12-01 15:03:47.568579] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:14.609 15:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.609 15:03:47 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:14.609 15:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.609 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.609 null0 00:23:14.609 15:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.609 15:03:47 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:14.609 15:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.609 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.609 null1 00:23:14.609 15:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.609 15:03:47 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:14.609 15:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.609 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.609 null2 00:23:14.609 15:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.609 15:03:47 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:14.609 15:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.609 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.609 null3 00:23:14.609 15:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.609 15:03:47 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
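Condensed, the target-side RPC configuration performed just above (mdns_discovery.sh @30-@39) is sketched below. rpc_cmd is the autotest wrapper used throughout this trace; without -s it talks to the namespaced nvmf_tgt on the default socket, while the later calls with -s /tmp/host.sock go to the host-side app. The sketch is an interpretation of the xtrace, not a verbatim excerpt of the script.

  # The target was started with --wait-for-rpc, so the framework is initialized explicitly.
  rpc_cmd nvmf_set_config --discovery-filter=address   # DISCOVERY_FILTER=address from the script header
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  # Discovery subsystem listener that the mDNS records will advertise (port 8009).
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  # Four null bdevs (size 1000, block size 512, as in the trace) to back the data subsystems.
  for b in null0 null1 null2 null3; do rpc_cmd bdev_null_create "$b" 1000 512; done
  rpc_cmd bdev_wait_for_examine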
00:23:14.609 15:03:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.609 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.610 15:03:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.610 15:03:47 -- host/mdns_discovery.sh@47 -- # hostpid=98424 00:23:14.610 15:03:47 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:14.610 15:03:47 -- host/mdns_discovery.sh@48 -- # waitforlisten 98424 /tmp/host.sock 00:23:14.610 15:03:47 -- common/autotest_common.sh@829 -- # '[' -z 98424 ']' 00:23:14.610 15:03:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:14.610 15:03:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.610 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:14.610 15:03:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:14.610 15:03:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.610 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.610 [2024-12-01 15:03:47.670417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:14.610 [2024-12-01 15:03:47.670523] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98424 ] 00:23:14.869 [2024-12-01 15:03:47.815014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.869 [2024-12-01 15:03:47.877832] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:14.869 [2024-12-01 15:03:47.878296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.804 15:03:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.804 15:03:48 -- common/autotest_common.sh@862 -- # return 0 00:23:15.804 15:03:48 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:15.804 15:03:48 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:15.804 15:03:48 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:15.804 15:03:48 -- host/mdns_discovery.sh@57 -- # avahipid=98454 00:23:15.804 15:03:48 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:15.804 15:03:48 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:15.804 15:03:48 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:15.804 Process 1062 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:15.804 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:15.804 Successfully dropped root privileges. 00:23:15.804 avahi-daemon 0.8 starting up. 00:23:15.804 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:15.804 Successfully called chroot(). 00:23:15.804 Successfully dropped remaining capabilities. 00:23:16.738 No service file found in /etc/avahi/services. 00:23:16.738 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:16.738 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
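The host side consists of a second SPDK app plus a scoped avahi-daemon, both launched in the trace just above. A hedged reconstruction follows; the avahi config content is exactly the echo shown in the log (fed via /dev/fd/63 there), while the temp-file plumbing here is only for illustration.

  # Second nvmf_tgt acts as the mDNS host/initiator, on its own RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!    # the real script then waits for /tmp/host.sock to come up
  # Restart avahi inside the target namespace, restricted to the target interfaces, IPv4 only.
  avahi_conf=$(mktemp)
  printf '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no\n' > "$avahi_conf"
  avahi-daemon --kill
  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f "$avahi_conf" &
  avahipid=$!

The avahi startup messages that follow (joining the mDNS groups on nvmf_tgt_if/nvmf_tgt_if2 and registering the 10.0.0.2/10.0.0.3 address records) confirm the daemon is scoped to exactly the two target interfaces.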
00:23:16.738 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:16.738 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:16.738 Network interface enumeration completed. 00:23:16.738 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:16.738 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:16.738 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:16.738 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:16.738 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1242994566. 00:23:16.738 15:03:49 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:16.738 15:03:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.738 15:03:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.738 15:03:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.738 15:03:49 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:16.738 15:03:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.738 15:03:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.738 15:03:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.738 15:03:49 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:16.738 15:03:49 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:16.738 15:03:49 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.738 15:03:49 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:16.738 15:03:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.738 15:03:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.738 15:03:49 -- host/mdns_discovery.sh@68 -- # sort 00:23:16.738 15:03:49 -- host/mdns_discovery.sh@68 -- # xargs 00:23:16.738 15:03:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.738 15:03:49 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:16.738 15:03:49 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:16.996 15:03:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.996 15:03:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@64 -- # sort 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@64 -- # xargs 00:23:16.996 15:03:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:16.996 15:03:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.996 15:03:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 15:03:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:16.996 15:03:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@68 -- # sort 00:23:16.996 15:03:49 -- common/autotest_common.sh@10 -- # set +x 
00:23:16.996 15:03:49 -- host/mdns_discovery.sh@68 -- # xargs 00:23:16.996 15:03:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@64 -- # sort 00:23:16.996 15:03:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.996 15:03:49 -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 15:03:49 -- host/mdns_discovery.sh@64 -- # xargs 00:23:16.996 15:03:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:16.996 15:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.996 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 15:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:16.996 15:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@68 -- # sort 00:23:16.996 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@68 -- # xargs 00:23:16.996 15:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.996 15:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.996 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@64 -- # sort 00:23:16.996 15:03:50 -- host/mdns_discovery.sh@64 -- # xargs 00:23:16.996 15:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.997 [2024-12-01 15:03:50.100517] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:17.254 15:03:50 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:17.254 15:03:50 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:17.254 15:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.254 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.254 [2024-12-01 15:03:50.145229] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.254 15:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.254 15:03:50 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:17.254 15:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.255 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.255 15:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.255 15:03:50 -- host/mdns_discovery.sh@111 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:17.255 15:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.255 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.255 15:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.255 15:03:50 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:17.255 15:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.255 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.255 15:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.255 15:03:50 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:17.255 15:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.255 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.255 15:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.255 15:03:50 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:17.255 15:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.255 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.255 [2024-12-01 15:03:50.185088] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:17.255 15:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.255 15:03:50 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:17.255 15:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.255 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.255 [2024-12-01 15:03:50.193098] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:17.255 15:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.255 15:03:50 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98515 00:23:17.255 15:03:50 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:17.255 15:03:50 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:18.191 [2024-12-01 15:03:51.000521] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:18.191 Established under name 'CDC' 00:23:18.449 [2024-12-01 15:03:51.400548] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:18.449 [2024-12-01 15:03:51.400573] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:18.449 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:18.449 cookie is 0 00:23:18.449 is_local: 1 00:23:18.449 our_own: 0 00:23:18.449 wide_area: 0 00:23:18.449 multicast: 1 00:23:18.449 cached: 1 00:23:18.449 [2024-12-01 15:03:51.500524] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:18.449 [2024-12-01 15:03:51.500542] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:18.449 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:18.449 cookie is 0 00:23:18.449 is_local: 1 00:23:18.449 our_own: 0 00:23:18.449 wide_area: 0 00:23:18.449 multicast: 1 00:23:18.449 cached: 1 00:23:19.381 [2024-12-01 15:03:52.404522] 
bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:19.381 [2024-12-01 15:03:52.404550] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:19.381 [2024-12-01 15:03:52.404569] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:19.381 [2024-12-01 15:03:52.490616] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:19.639 [2024-12-01 15:03:52.504214] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:19.639 [2024-12-01 15:03:52.504234] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:19.639 [2024-12-01 15:03:52.504254] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.639 [2024-12-01 15:03:52.547792] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:19.639 [2024-12-01 15:03:52.547815] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:19.639 [2024-12-01 15:03:52.592006] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:19.639 [2024-12-01 15:03:52.653566] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:19.639 [2024-12-01 15:03:52.653590] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:22.174 15:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@80 -- # sort 00:23:22.174 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@80 -- # xargs 00:23:22.174 15:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:22.174 15:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.174 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@76 -- # xargs 00:23:22.174 15:03:55 -- host/mdns_discovery.sh@76 -- # sort 00:23:22.174 15:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.442 15:03:55 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:22.442 15:03:55 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:22.442 15:03:55 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.442 15:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.442 15:03:55 -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.442 15:03:55 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@68 -- # sort 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@68 -- # xargs 00:23:22.443 15:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:22.443 15:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.443 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@64 -- # xargs 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@64 -- # sort 00:23:22.443 15:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:22.443 15:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.443 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@72 -- # xargs 00:23:22.443 15:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:22.443 15:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:22.443 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@72 -- # xargs 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:22.443 15:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:22.443 15:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.443 15:03:55 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:22.443 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.443 15:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.701 15:03:55 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:22.701 15:03:55 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:22.701 15:03:55 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:22.701 15:03:55 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:22.701 15:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.701 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.701 15:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.701 15:03:55 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:22.701 15:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.701 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:23:22.701 15:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.701 15:03:55 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.635 15:03:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.635 15:03:56 -- common/autotest_common.sh@10 -- # set +x 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@64 -- # sort 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@64 -- # xargs 00:23:23.635 15:03:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:23.635 15:03:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.635 15:03:56 -- common/autotest_common.sh@10 -- # set +x 00:23:23.635 15:03:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:23.635 15:03:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.635 15:03:56 -- common/autotest_common.sh@10 -- # set +x 00:23:23.635 [2024-12-01 15:03:56.711694] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:23.635 [2024-12-01 15:03:56.712313] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:23.635 [2024-12-01 15:03:56.712341] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.635 [2024-12-01 15:03:56.712381] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:23.635 [2024-12-01 15:03:56.712392] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:23.635 15:03:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:23.635 15:03:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.635 15:03:56 -- common/autotest_common.sh@10 -- # set +x 00:23:23.635 [2024-12-01 15:03:56.719589] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:23.635 [2024-12-01 15:03:56.720326] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:23.635 [2024-12-01 15:03:56.720386] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:23.635 15:03:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.635 15:03:56 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:23.894 [2024-12-01 15:03:56.851408] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:23.894 [2024-12-01 15:03:56.851541] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:23.894 [2024-12-01 15:03:56.908587] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:23.894 [2024-12-01 15:03:56.908608] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:23.894 [2024-12-01 15:03:56.908614] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:23.894 [2024-12-01 15:03:56.908629] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.894 [2024-12-01 15:03:56.908687] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:23.894 [2024-12-01 15:03:56.908695] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:23.894 [2024-12-01 15:03:56.908700] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:23.894 [2024-12-01 15:03:56.908711] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:23.894 [2024-12-01 15:03:56.954488] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:23.894 [2024-12-01 15:03:56.954506] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:23.894 [2024-12-01 15:03:56.954541] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:23.894 [2024-12-01 15:03:56.954548] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:24.827 15:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@68 -- # sort 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:24.827 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@68 -- # xargs 00:23:24.827 15:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@64 -- # sort 00:23:24.827 15:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.827 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@64 -- # xargs 00:23:24.827 15:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:24.827 15:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.827 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@72 -- # xargs 00:23:24.827 15:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:24.827 15:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:24.827 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.827 15:03:57 -- host/mdns_discovery.sh@72 -- # xargs 00:23:24.827 15:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.089 15:03:57 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:25.089 15:03:57 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:25.089 15:03:57 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:25.089 15:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.089 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:23:25.089 15:03:57 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:25.089 15:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.089 15:03:58 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:25.089 15:03:58 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:25.089 15:03:58 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:25.089 15:03:58 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:25.089 15:03:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.089 15:03:58 -- common/autotest_common.sh@10 -- # set +x 00:23:25.089 [2024-12-01 15:03:58.024395] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:25.089 [2024-12-01 15:03:58.024424] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.089 [2024-12-01 15:03:58.024453] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:25.089 [2024-12-01 15:03:58.024464] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:25.089 15:03:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.089 15:03:58 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:25.089 15:03:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.089 15:03:58 -- common/autotest_common.sh@10 -- # set +x 00:23:25.089 [2024-12-01 15:03:58.032421] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:25.089 [2024-12-01 15:03:58.032480] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:25.089 [2024-12-01 15:03:58.033516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.089 [2024-12-01 15:03:58.033559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.089 [2024-12-01 15:03:58.033570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.089 [2024-12-01 15:03:58.033578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.089 [2024-12-01 15:03:58.033587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:25.089 [2024-12-01 15:03:58.033594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.089 [2024-12-01 15:03:58.033602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.090 [2024-12-01 15:03:58.033610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.090 [2024-12-01 15:03:58.033617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.090 [2024-12-01 15:03:58.036461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.090 [2024-12-01 15:03:58.036494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.090 [2024-12-01 15:03:58.036504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.090 [2024-12-01 15:03:58.036512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.090 [2024-12-01 15:03:58.036520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.090 [2024-12-01 15:03:58.036528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.090 [2024-12-01 15:03:58.036536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.090 [2024-12-01 15:03:58.036543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.090 [2024-12-01 15:03:58.036550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.090 15:03:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.090 15:03:58 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:25.090 [2024-12-01 15:03:58.043464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.090 [2024-12-01 15:03:58.046434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.090 [2024-12-01 15:03:58.053494] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.090 [2024-12-01 15:03:58.053579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.053621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.053635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.090 [2024-12-01 15:03:58.053645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.090 [2024-12-01 15:03:58.053660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x746760 (9): Bad file descriptor 00:23:25.090 [2024-12-01 15:03:58.053672] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.090 [2024-12-01 15:03:58.053680] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.090 [2024-12-01 15:03:58.053689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.090 [2024-12-01 15:03:58.053703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.090 [2024-12-01 15:03:58.056444] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.090 [2024-12-01 15:03:58.056512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.056552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.056567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.090 [2024-12-01 15:03:58.056576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.090 [2024-12-01 15:03:58.056590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.090 [2024-12-01 15:03:58.056613] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.090 [2024-12-01 15:03:58.056622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.090 [2024-12-01 15:03:58.056629] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.090 [2024-12-01 15:03:58.056642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.090 [2024-12-01 15:03:58.063542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.090 [2024-12-01 15:03:58.063610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.063650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.063664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.090 [2024-12-01 15:03:58.063672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.090 [2024-12-01 15:03:58.063686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.090 [2024-12-01 15:03:58.063698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.090 [2024-12-01 15:03:58.063706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.090 [2024-12-01 15:03:58.063713] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.090 [2024-12-01 15:03:58.063726] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
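The repeated connect()/reset errors that begin here are a direct consequence of the listener changes a few steps earlier rather than an unexpected failure: 4421 listeners were added for both subsystems, then the 4420 listeners were removed, so the host's reconnect attempts to port 4420 are refused (errno = 111 is ECONNREFUSED), presumably until the discovery poller settles on the remaining 4421 paths. Condensed, the sequence driving this is sketched below; the final check is inferred from helper names used elsewhere in this trace and is not visible in this excerpt.

  # Failover exercise as traced (host/mdns_discovery.sh @147/@148 and @160/@161):
  rpc_cmd nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4421
  rpc_cmd nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421
  # ...discovery AERs fire and the 4421 paths are attached ("new path" lines above)...
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
  # Presumed follow-up (not shown in this excerpt): get_subsystem_paths reports only 4421 per controller.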
00:23:25.090 [2024-12-01 15:03:58.066487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.090 [2024-12-01 15:03:58.066569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.066610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.066623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.090 [2024-12-01 15:03:58.066632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.090 [2024-12-01 15:03:58.066645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.090 [2024-12-01 15:03:58.066657] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.090 [2024-12-01 15:03:58.066665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.090 [2024-12-01 15:03:58.066674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.090 [2024-12-01 15:03:58.066686] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.090 [2024-12-01 15:03:58.073585] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.090 [2024-12-01 15:03:58.073653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.073692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.073705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.090 [2024-12-01 15:03:58.073714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.090 [2024-12-01 15:03:58.073728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.090 [2024-12-01 15:03:58.073740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.090 [2024-12-01 15:03:58.073748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.090 [2024-12-01 15:03:58.073768] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.090 [2024-12-01 15:03:58.073781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.090 [2024-12-01 15:03:58.076543] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.090 [2024-12-01 15:03:58.076607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.076647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.076661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.090 [2024-12-01 15:03:58.076672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.090 [2024-12-01 15:03:58.076686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.090 [2024-12-01 15:03:58.076708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.090 [2024-12-01 15:03:58.076718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.090 [2024-12-01 15:03:58.076725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.090 [2024-12-01 15:03:58.076737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.090 [2024-12-01 15:03:58.083628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.090 [2024-12-01 15:03:58.083703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.083745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.083770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.090 [2024-12-01 15:03:58.083781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.090 [2024-12-01 15:03:58.083796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.090 [2024-12-01 15:03:58.083808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.090 [2024-12-01 15:03:58.083816] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.090 [2024-12-01 15:03:58.083824] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.090 [2024-12-01 15:03:58.083837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.090 [2024-12-01 15:03:58.086584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.090 [2024-12-01 15:03:58.086654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.086693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.090 [2024-12-01 15:03:58.086707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.090 [2024-12-01 15:03:58.086717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.090 [2024-12-01 15:03:58.086731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.090 [2024-12-01 15:03:58.086743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.090 [2024-12-01 15:03:58.086761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.091 [2024-12-01 15:03:58.086770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.091 [2024-12-01 15:03:58.086783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.091 [2024-12-01 15:03:58.093674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.091 [2024-12-01 15:03:58.093746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.093818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.093833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.091 [2024-12-01 15:03:58.093841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.091 [2024-12-01 15:03:58.093857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.091 [2024-12-01 15:03:58.093869] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.091 [2024-12-01 15:03:58.093877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.091 [2024-12-01 15:03:58.093885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.091 [2024-12-01 15:03:58.093897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
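As a side note, errno = 111 in the posix_sock_create errors above is ECONNREFUSED. A hypothetical manual probe (not part of the test, assuming the same addresses and bash's /dev/tcp support) would show the asymmetry directly:

  # Should fail with "Connection refused" now that the 4420 listener is removed:
  bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo '4420 refused, as expected'
  # The 4421 listener added earlier should still accept a connection:
  bash -c 'exec 3<>/dev/tcp/10.0.0.2/4421' && echo '4421 open'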
00:23:25.091 [2024-12-01 15:03:58.096627] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.091 [2024-12-01 15:03:58.096693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.096732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.096746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.091 [2024-12-01 15:03:58.096766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.091 [2024-12-01 15:03:58.096780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.091 [2024-12-01 15:03:58.096803] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.091 [2024-12-01 15:03:58.096813] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.091 [2024-12-01 15:03:58.096820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.091 [2024-12-01 15:03:58.096833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.091 [2024-12-01 15:03:58.103718] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.091 [2024-12-01 15:03:58.103796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.103835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.103849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.091 [2024-12-01 15:03:58.103858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.091 [2024-12-01 15:03:58.103872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.091 [2024-12-01 15:03:58.103884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.091 [2024-12-01 15:03:58.103892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.091 [2024-12-01 15:03:58.103900] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.091 [2024-12-01 15:03:58.103912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.091 [2024-12-01 15:03:58.106669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.091 [2024-12-01 15:03:58.106747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.106799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.106813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.091 [2024-12-01 15:03:58.106822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.091 [2024-12-01 15:03:58.106836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.091 [2024-12-01 15:03:58.106848] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.091 [2024-12-01 15:03:58.106856] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.091 [2024-12-01 15:03:58.106864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.091 [2024-12-01 15:03:58.106877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.091 [2024-12-01 15:03:58.113776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.091 [2024-12-01 15:03:58.113853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.113892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.113907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.091 [2024-12-01 15:03:58.113916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.091 [2024-12-01 15:03:58.113930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.091 [2024-12-01 15:03:58.113942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.091 [2024-12-01 15:03:58.113949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.091 [2024-12-01 15:03:58.113957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.091 [2024-12-01 15:03:58.113969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.091 [2024-12-01 15:03:58.116721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.091 [2024-12-01 15:03:58.116806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.116846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.116860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.091 [2024-12-01 15:03:58.116869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.091 [2024-12-01 15:03:58.116883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.091 [2024-12-01 15:03:58.116904] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.091 [2024-12-01 15:03:58.116914] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.091 [2024-12-01 15:03:58.116921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.091 [2024-12-01 15:03:58.116934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.091 [2024-12-01 15:03:58.123832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.091 [2024-12-01 15:03:58.123908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.123949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.123963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.091 [2024-12-01 15:03:58.123972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.091 [2024-12-01 15:03:58.123986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.091 [2024-12-01 15:03:58.123999] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.091 [2024-12-01 15:03:58.124008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.091 [2024-12-01 15:03:58.124015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.091 [2024-12-01 15:03:58.124028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
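The same disconnect/reconnect/fail cycle keeps repeating for both controllers because the bdev layer retries a lost path on a timer rather than dropping it immediately. How long a path keeps being retried is governed by the reconnect options given when a controller is attached; the call below is purely illustrative (this run attaches controllers through mDNS discovery, and the exact flag names are our reading of scripts/rpc.py, so treat them as an assumption):

# Hypothetical attach call showing the reconnect-related knobs:
#   --reconnect-delay-sec     seconds to wait between reconnect attempts
#   --ctrlr-loss-timeout-sec  seconds of continuous failure before the path is dropped
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    --reconnect-delay-sec 1 --ctrlr-loss-timeout-sec 10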
00:23:25.091 [2024-12-01 15:03:58.126772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.091 [2024-12-01 15:03:58.126852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.126892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.126906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.091 [2024-12-01 15:03:58.126915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.091 [2024-12-01 15:03:58.126928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.091 [2024-12-01 15:03:58.126942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.091 [2024-12-01 15:03:58.126950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.091 [2024-12-01 15:03:58.126957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.091 [2024-12-01 15:03:58.126970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.091 [2024-12-01 15:03:58.133878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.091 [2024-12-01 15:03:58.133946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.133985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.091 [2024-12-01 15:03:58.133998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.091 [2024-12-01 15:03:58.134008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.091 [2024-12-01 15:03:58.134021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.091 [2024-12-01 15:03:58.134034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.091 [2024-12-01 15:03:58.134042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.091 [2024-12-01 15:03:58.134050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.091 [2024-12-01 15:03:58.134062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.091 [2024-12-01 15:03:58.136826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.092 [2024-12-01 15:03:58.136890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.092 [2024-12-01 15:03:58.136929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.092 [2024-12-01 15:03:58.136943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.092 [2024-12-01 15:03:58.136952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.092 [2024-12-01 15:03:58.136966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.092 [2024-12-01 15:03:58.136988] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.092 [2024-12-01 15:03:58.136997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.092 [2024-12-01 15:03:58.137005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.092 [2024-12-01 15:03:58.137017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.092 [2024-12-01 15:03:58.143921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.092 [2024-12-01 15:03:58.143988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.092 [2024-12-01 15:03:58.144027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.092 [2024-12-01 15:03:58.144041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.092 [2024-12-01 15:03:58.144050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.092 [2024-12-01 15:03:58.144064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.092 [2024-12-01 15:03:58.144076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.092 [2024-12-01 15:03:58.144084] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.092 [2024-12-01 15:03:58.144092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.092 [2024-12-01 15:03:58.144104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.092 [2024-12-01 15:03:58.146866] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.092 [2024-12-01 15:03:58.146934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.092 [2024-12-01 15:03:58.146973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.092 [2024-12-01 15:03:58.146987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.092 [2024-12-01 15:03:58.146996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.092 [2024-12-01 15:03:58.147010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.092 [2024-12-01 15:03:58.147023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.092 [2024-12-01 15:03:58.147030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.092 [2024-12-01 15:03:58.147038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.092 [2024-12-01 15:03:58.147050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.092 [2024-12-01 15:03:58.153963] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.092 [2024-12-01 15:03:58.154030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.092 [2024-12-01 15:03:58.154069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.092 [2024-12-01 15:03:58.154082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x746760 with addr=10.0.0.2, port=4420 00:23:25.092 [2024-12-01 15:03:58.154091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x746760 is same with the state(5) to be set 00:23:25.092 [2024-12-01 15:03:58.154104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x746760 (9): Bad file descriptor 00:23:25.092 [2024-12-01 15:03:58.154116] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.092 [2024-12-01 15:03:58.154124] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.092 [2024-12-01 15:03:58.154132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.092 [2024-12-01 15:03:58.154145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.092 [2024-12-01 15:03:58.156909] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.092 [2024-12-01 15:03:58.156974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.092 [2024-12-01 15:03:58.157013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.092 [2024-12-01 15:03:58.157026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7489a0 with addr=10.0.0.3, port=4420 00:23:25.092 [2024-12-01 15:03:58.157036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7489a0 is same with the state(5) to be set 00:23:25.092 [2024-12-01 15:03:58.157049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7489a0 (9): Bad file descriptor 00:23:25.092 [2024-12-01 15:03:58.157071] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.092 [2024-12-01 15:03:58.157080] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.092 [2024-12-01 15:03:58.157088] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.092 [2024-12-01 15:03:58.157100] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.092 [2024-12-01 15:03:58.163483] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:25.092 [2024-12-01 15:03:58.163507] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:25.092 [2024-12-01 15:03:58.163524] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.092 [2024-12-01 15:03:58.163553] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:25.092 [2024-12-01 15:03:58.163566] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:25.092 [2024-12-01 15:03:58.163577] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:25.350 [2024-12-01 15:03:58.249572] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:25.350 [2024-12-01 15:03:58.249631] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:26.285 15:03:59 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:26.285 15:03:59 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.285 15:03:59 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:26.285 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.285 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.285 15:03:59 -- host/mdns_discovery.sh@68 -- # xargs 00:23:26.285 15:03:59 -- host/mdns_discovery.sh@68 -- # sort 00:23:26.285 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.285 15:03:59 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 
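The assertions above turn the JSON from rpc_cmd into flat strings by piping through jq, sort and xargs, then compare them with bash pattern tests. The same verification can be run by hand against the host application's RPC socket; this sketch assumes the /tmp/host.sock socket and the mdns0_nvme0 controller name seen in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# All attached controller names, collapsed to "mdns0_nvme0 mdns1_nvme0"
names=$($rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)
echo "controllers: $names"

# Active trsvcid values for one controller; once the 4420 listeners are gone this
# collapses to "4421", which is exactly what the [[ 4421 == 4421 ]] checks assert
ports=$($rpc -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
[[ $ports == 4421 ]] && echo "mdns0_nvme0 is now reachable only via port 4421"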
00:23:26.285 15:03:59 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:26.286 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.286 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@64 -- # sort 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@64 -- # xargs 00:23:26.286 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:26.286 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@72 -- # xargs 00:23:26.286 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.286 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:26.286 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.286 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@72 -- # xargs 00:23:26.286 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:26.286 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.286 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.286 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:26.286 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.286 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.286 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.286 15:03:59 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:26.544 [2024-12-01 15:03:59.400532] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:27.477 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.477 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@80 -- # xargs 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@80 -- # sort 00:23:27.477 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.477 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.477 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@68 -- # sort 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@68 -- # xargs 00:23:27.477 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:27.477 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@64 -- # xargs 00:23:27.477 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@64 -- # sort 00:23:27.477 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:27.477 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.477 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.477 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:27.477 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.477 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.477 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:27.477 15:04:00 -- common/autotest_common.sh@650 -- # local es=0 00:23:27.477 15:04:00 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:27.477 15:04:00 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:27.477 15:04:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.477 15:04:00 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:27.477 15:04:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.477 15:04:00 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:27.477 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.477 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:23:27.477 [2024-12-01 15:04:00.533743] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:27.477 2024/12/01 15:04:00 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:27.477 request: 00:23:27.477 { 00:23:27.477 "method": "bdev_nvme_start_mdns_discovery", 00:23:27.477 "params": { 00:23:27.477 "name": "mdns", 00:23:27.477 "svcname": "_nvme-disc._http", 00:23:27.477 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:27.477 } 00:23:27.477 } 00:23:27.477 Got JSON-RPC error response 00:23:27.477 GoRPCClient: error on JSON-RPC call 00:23:27.477 15:04:00 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:27.477 15:04:00 -- common/autotest_common.sh@653 -- # es=1 00:23:27.477 15:04:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:27.477 15:04:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:27.477 15:04:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:27.477 15:04:00 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:28.041 [2024-12-01 15:04:00.922364] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:28.041 [2024-12-01 15:04:01.022360] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:28.041 [2024-12-01 15:04:01.122364] bdev_mdns_client.c: 254:mdns_resolve_handler: 
*INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:28.041 [2024-12-01 15:04:01.122380] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:28.041 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:28.041 cookie is 0 00:23:28.041 is_local: 1 00:23:28.041 our_own: 0 00:23:28.041 wide_area: 0 00:23:28.041 multicast: 1 00:23:28.041 cached: 1 00:23:28.298 [2024-12-01 15:04:01.222369] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:28.298 [2024-12-01 15:04:01.222392] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:28.298 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:28.298 cookie is 0 00:23:28.298 is_local: 1 00:23:28.298 our_own: 0 00:23:28.298 wide_area: 0 00:23:28.298 multicast: 1 00:23:28.298 cached: 1 00:23:29.231 [2024-12-01 15:04:02.125926] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:29.231 [2024-12-01 15:04:02.125946] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:29.231 [2024-12-01 15:04:02.125961] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:29.231 [2024-12-01 15:04:02.212007] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:29.231 [2024-12-01 15:04:02.225894] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.231 [2024-12-01 15:04:02.225912] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.231 [2024-12-01 15:04:02.225926] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.231 [2024-12-01 15:04:02.272356] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:29.231 [2024-12-01 15:04:02.272382] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:29.231 [2024-12-01 15:04:02.312606] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:29.489 [2024-12-01 15:04:02.371119] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:29.489 [2024-12-01 15:04:02.371144] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:32.776 15:04:05 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:23:32.776 15:04:05 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:32.776 15:04:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.776 15:04:05 -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 15:04:05 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:32.776 15:04:05 -- host/mdns_discovery.sh@80 -- # sort 00:23:32.776 15:04:05 -- host/mdns_discovery.sh@80 -- # xargs 00:23:32.776 15:04:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.776 15:04:05 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:32.776 15:04:05 -- 
host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:32.776 15:04:05 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:32.776 15:04:05 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:32.776 15:04:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.776 15:04:05 -- host/mdns_discovery.sh@76 -- # sort 00:23:32.776 15:04:05 -- host/mdns_discovery.sh@76 -- # xargs 00:23:32.776 15:04:05 -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 15:04:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:32.777 15:04:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.777 15:04:05 -- common/autotest_common.sh@10 -- # set +x 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@64 -- # sort 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@64 -- # xargs 00:23:32.777 15:04:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:32.777 15:04:05 -- common/autotest_common.sh@650 -- # local es=0 00:23:32.777 15:04:05 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:32.777 15:04:05 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:32.777 15:04:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.777 15:04:05 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:32.777 15:04:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.777 15:04:05 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:32.777 15:04:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.777 15:04:05 -- common/autotest_common.sh@10 -- # set +x 00:23:32.777 [2024-12-01 15:04:05.716091] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:32.777 2024/12/01 15:04:05 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:32.777 request: 00:23:32.777 { 00:23:32.777 "method": "bdev_nvme_start_mdns_discovery", 00:23:32.777 "params": { 00:23:32.777 "name": "cdc", 00:23:32.777 "svcname": "_nvme-disc._tcp", 00:23:32.777 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:32.777 } 00:23:32.777 } 00:23:32.777 Got JSON-RPC error response 00:23:32.777 GoRPCClient: error on JSON-RPC call 00:23:32.777 15:04:05 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:32.777 15:04:05 -- common/autotest_common.sh@653 -- # es=1 00:23:32.777 15:04:05 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:32.777 15:04:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:32.777 15:04:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:32.777 15:04:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.777 15:04:05 -- common/autotest_common.sh@10 -- # set +x 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@76 -- # sort 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@76 -- # xargs 00:23:32.777 15:04:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.777 15:04:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:32.777 15:04:05 -- common/autotest_common.sh@10 -- # set +x 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@64 -- # sort 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@64 -- # xargs 00:23:32.777 15:04:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:32.777 15:04:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.777 15:04:05 -- common/autotest_common.sh@10 -- # set +x 00:23:32.777 15:04:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@197 -- # kill 98424 00:23:32.777 15:04:05 -- host/mdns_discovery.sh@200 -- # wait 98424 00:23:33.035 [2024-12-01 15:04:05.940016] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:33.035 15:04:06 -- host/mdns_discovery.sh@201 -- # kill 98515 00:23:33.035 15:04:06 -- host/mdns_discovery.sh@202 -- # kill 98454 00:23:33.035 Got SIGTERM, quitting. 00:23:33.035 Got SIGTERM, quitting. 00:23:33.035 15:04:06 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:33.035 15:04:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:33.035 15:04:06 -- nvmf/common.sh@116 -- # sync 00:23:33.035 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:33.035 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:33.035 avahi-daemon 0.8 exiting. 
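The "Leaving mDNS multicast group" and "avahi-daemon 0.8 exiting" messages come from the Avahi daemon that advertised the two discovery controllers for this test; the records it published are the "CDC" services of type "_nvme-disc._tcp" on port 8009 that the resolver logged earlier. An equivalent advertisement can be published by hand with avahi-utils; this is a hedged stand-in for whatever the test framework actually runs, not its code:

# Advertise an NVMe-oF central discovery controller over mDNS on port 8009
# (avahi-publish-service ships with the avahi-tools/avahi-utils package)
avahi-publish-service "CDC" _nvme-disc._tcp 8009 \
    "p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery"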
00:23:33.035 15:04:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:33.035 15:04:06 -- nvmf/common.sh@119 -- # set +e 00:23:33.035 15:04:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:33.035 15:04:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:33.035 rmmod nvme_tcp 00:23:33.035 rmmod nvme_fabrics 00:23:33.035 rmmod nvme_keyring 00:23:33.035 15:04:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:33.035 15:04:06 -- nvmf/common.sh@123 -- # set -e 00:23:33.035 15:04:06 -- nvmf/common.sh@124 -- # return 0 00:23:33.035 15:04:06 -- nvmf/common.sh@477 -- # '[' -n 98374 ']' 00:23:33.035 15:04:06 -- nvmf/common.sh@478 -- # killprocess 98374 00:23:33.035 15:04:06 -- common/autotest_common.sh@936 -- # '[' -z 98374 ']' 00:23:33.035 15:04:06 -- common/autotest_common.sh@940 -- # kill -0 98374 00:23:33.035 15:04:06 -- common/autotest_common.sh@941 -- # uname 00:23:33.035 15:04:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:33.035 15:04:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98374 00:23:33.294 15:04:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:33.294 15:04:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:33.294 killing process with pid 98374 00:23:33.294 15:04:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98374' 00:23:33.294 15:04:06 -- common/autotest_common.sh@955 -- # kill 98374 00:23:33.294 15:04:06 -- common/autotest_common.sh@960 -- # wait 98374 00:23:33.553 15:04:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:33.553 15:04:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:33.553 15:04:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:33.553 15:04:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.553 15:04:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:33.553 15:04:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.553 15:04:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.553 15:04:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.553 15:04:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:33.553 00:23:33.553 real 0m20.725s 00:23:33.553 user 0m40.404s 00:23:33.553 sys 0m1.986s 00:23:33.553 ************************************ 00:23:33.553 END TEST nvmf_mdns_discovery 00:23:33.553 ************************************ 00:23:33.553 15:04:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:33.553 15:04:06 -- common/autotest_common.sh@10 -- # set +x 00:23:33.553 15:04:06 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:33.553 15:04:06 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:33.553 15:04:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:33.553 15:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:33.553 15:04:06 -- common/autotest_common.sh@10 -- # set +x 00:23:33.553 ************************************ 00:23:33.553 START TEST nvmf_multipath 00:23:33.553 ************************************ 00:23:33.553 15:04:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:33.553 * Looking for test storage... 
00:23:33.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:33.553 15:04:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:33.553 15:04:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:33.553 15:04:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:33.553 15:04:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:33.553 15:04:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:33.553 15:04:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:33.553 15:04:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:33.553 15:04:06 -- scripts/common.sh@335 -- # IFS=.-: 00:23:33.553 15:04:06 -- scripts/common.sh@335 -- # read -ra ver1 00:23:33.553 15:04:06 -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.553 15:04:06 -- scripts/common.sh@336 -- # read -ra ver2 00:23:33.553 15:04:06 -- scripts/common.sh@337 -- # local 'op=<' 00:23:33.553 15:04:06 -- scripts/common.sh@339 -- # ver1_l=2 00:23:33.553 15:04:06 -- scripts/common.sh@340 -- # ver2_l=1 00:23:33.553 15:04:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:33.553 15:04:06 -- scripts/common.sh@343 -- # case "$op" in 00:23:33.553 15:04:06 -- scripts/common.sh@344 -- # : 1 00:23:33.553 15:04:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:33.553 15:04:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.553 15:04:06 -- scripts/common.sh@364 -- # decimal 1 00:23:33.812 15:04:06 -- scripts/common.sh@352 -- # local d=1 00:23:33.812 15:04:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.812 15:04:06 -- scripts/common.sh@354 -- # echo 1 00:23:33.812 15:04:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:33.812 15:04:06 -- scripts/common.sh@365 -- # decimal 2 00:23:33.812 15:04:06 -- scripts/common.sh@352 -- # local d=2 00:23:33.812 15:04:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.812 15:04:06 -- scripts/common.sh@354 -- # echo 2 00:23:33.812 15:04:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:33.812 15:04:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:33.812 15:04:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:33.812 15:04:06 -- scripts/common.sh@367 -- # return 0 00:23:33.812 15:04:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.812 15:04:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:33.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.812 --rc genhtml_branch_coverage=1 00:23:33.812 --rc genhtml_function_coverage=1 00:23:33.812 --rc genhtml_legend=1 00:23:33.812 --rc geninfo_all_blocks=1 00:23:33.812 --rc geninfo_unexecuted_blocks=1 00:23:33.812 00:23:33.812 ' 00:23:33.812 15:04:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:33.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.812 --rc genhtml_branch_coverage=1 00:23:33.812 --rc genhtml_function_coverage=1 00:23:33.812 --rc genhtml_legend=1 00:23:33.812 --rc geninfo_all_blocks=1 00:23:33.812 --rc geninfo_unexecuted_blocks=1 00:23:33.812 00:23:33.812 ' 00:23:33.812 15:04:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:33.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.812 --rc genhtml_branch_coverage=1 00:23:33.812 --rc genhtml_function_coverage=1 00:23:33.812 --rc genhtml_legend=1 00:23:33.812 --rc geninfo_all_blocks=1 00:23:33.812 --rc geninfo_unexecuted_blocks=1 00:23:33.812 00:23:33.812 ' 00:23:33.812 
15:04:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:33.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.812 --rc genhtml_branch_coverage=1 00:23:33.812 --rc genhtml_function_coverage=1 00:23:33.812 --rc genhtml_legend=1 00:23:33.812 --rc geninfo_all_blocks=1 00:23:33.812 --rc geninfo_unexecuted_blocks=1 00:23:33.812 00:23:33.812 ' 00:23:33.812 15:04:06 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:33.812 15:04:06 -- nvmf/common.sh@7 -- # uname -s 00:23:33.812 15:04:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.812 15:04:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.812 15:04:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.812 15:04:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.812 15:04:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.812 15:04:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.812 15:04:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.812 15:04:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.812 15:04:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.812 15:04:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.812 15:04:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:23:33.812 15:04:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:23:33.812 15:04:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.812 15:04:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.812 15:04:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:33.812 15:04:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:33.813 15:04:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.813 15:04:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.813 15:04:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.813 15:04:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.813 15:04:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.813 15:04:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.813 15:04:06 -- paths/export.sh@5 -- # export PATH 00:23:33.813 15:04:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.813 15:04:06 -- nvmf/common.sh@46 -- # : 0 00:23:33.813 15:04:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:33.813 15:04:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:33.813 15:04:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:33.813 15:04:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.813 15:04:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.813 15:04:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:33.813 15:04:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:33.813 15:04:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:33.813 15:04:06 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.813 15:04:06 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.813 15:04:06 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:33.813 15:04:06 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:33.813 15:04:06 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.813 15:04:06 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:33.813 15:04:06 -- host/multipath.sh@30 -- # nvmftestinit 00:23:33.813 15:04:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:33.813 15:04:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.813 15:04:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:33.813 15:04:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:33.813 15:04:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:33.813 15:04:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.813 15:04:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.813 15:04:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.813 15:04:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:33.813 15:04:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:33.813 15:04:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:33.813 15:04:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:33.813 15:04:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:33.813 15:04:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:33.813 15:04:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.813 15:04:06 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.813 15:04:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:33.813 15:04:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:33.813 15:04:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:33.813 15:04:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:33.813 15:04:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:33.813 15:04:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.813 15:04:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:33.813 15:04:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:33.813 15:04:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:33.813 15:04:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:33.813 15:04:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:33.813 15:04:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:33.813 Cannot find device "nvmf_tgt_br" 00:23:33.813 15:04:06 -- nvmf/common.sh@154 -- # true 00:23:33.813 15:04:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.813 Cannot find device "nvmf_tgt_br2" 00:23:33.813 15:04:06 -- nvmf/common.sh@155 -- # true 00:23:33.813 15:04:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:33.813 15:04:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:33.813 Cannot find device "nvmf_tgt_br" 00:23:33.813 15:04:06 -- nvmf/common.sh@157 -- # true 00:23:33.813 15:04:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:33.813 Cannot find device "nvmf_tgt_br2" 00:23:33.813 15:04:06 -- nvmf/common.sh@158 -- # true 00:23:33.813 15:04:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:33.813 15:04:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:33.813 15:04:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:33.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.813 15:04:06 -- nvmf/common.sh@161 -- # true 00:23:33.813 15:04:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:33.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.813 15:04:06 -- nvmf/common.sh@162 -- # true 00:23:33.813 15:04:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:33.813 15:04:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:33.813 15:04:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:33.813 15:04:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:33.813 15:04:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:33.813 15:04:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:33.813 15:04:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:33.813 15:04:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:34.072 15:04:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:34.072 15:04:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:34.072 15:04:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:34.072 15:04:06 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:34.072 15:04:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:34.072 15:04:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:34.072 15:04:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:34.072 15:04:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:34.072 15:04:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:34.072 15:04:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:34.072 15:04:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:34.072 15:04:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:34.072 15:04:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:34.072 15:04:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:34.072 15:04:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:34.072 15:04:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:34.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:23:34.072 00:23:34.072 --- 10.0.0.2 ping statistics --- 00:23:34.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.072 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:34.072 15:04:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:34.072 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:34.072 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:23:34.072 00:23:34.072 --- 10.0.0.3 ping statistics --- 00:23:34.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.072 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:34.072 15:04:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:34.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:34.072 00:23:34.072 --- 10.0.0.1 ping statistics --- 00:23:34.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.072 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:34.072 15:04:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.072 15:04:07 -- nvmf/common.sh@421 -- # return 0 00:23:34.072 15:04:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:34.072 15:04:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.072 15:04:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:34.072 15:04:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:34.072 15:04:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.072 15:04:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:34.072 15:04:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:34.072 15:04:07 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:34.072 15:04:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:34.072 15:04:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:34.072 15:04:07 -- common/autotest_common.sh@10 -- # set +x 00:23:34.072 15:04:07 -- nvmf/common.sh@469 -- # nvmfpid=99032 00:23:34.072 15:04:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:34.072 15:04:07 -- nvmf/common.sh@470 -- # waitforlisten 99032 00:23:34.072 15:04:07 -- common/autotest_common.sh@829 -- # '[' -z 99032 ']' 00:23:34.072 15:04:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.072 15:04:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.072 15:04:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.072 15:04:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.072 15:04:07 -- common/autotest_common.sh@10 -- # set +x 00:23:34.072 [2024-12-01 15:04:07.125326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:34.072 [2024-12-01 15:04:07.125420] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.330 [2024-12-01 15:04:07.266801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:34.330 [2024-12-01 15:04:07.322952] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:34.330 [2024-12-01 15:04:07.323086] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.330 [2024-12-01 15:04:07.323097] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.330 [2024-12-01 15:04:07.323105] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
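For orientation, the veth/namespace plumbing that nvmf_veth_init assembled in the trace above can be restated as the condensed shell sketch below. It only repeats commands already visible in the log (interface names, addresses and the 4420 firewall rule are copied from the trace); it is a simplified summary, not the exact contents of nvmf/common.sh.

    # target stack lives in its own network namespace, reachable through a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # a second target interface (nvmf_tgt_if2, 10.0.0.3) is created and addressed the same way
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge both pairs together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # sanity check before starting the target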
00:23:34.330 [2024-12-01 15:04:07.323689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.330 [2024-12-01 15:04:07.323788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.267 15:04:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.267 15:04:08 -- common/autotest_common.sh@862 -- # return 0 00:23:35.267 15:04:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:35.267 15:04:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.267 15:04:08 -- common/autotest_common.sh@10 -- # set +x 00:23:35.267 15:04:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.267 15:04:08 -- host/multipath.sh@33 -- # nvmfapp_pid=99032 00:23:35.267 15:04:08 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:35.524 [2024-12-01 15:04:08.468493] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.524 15:04:08 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:35.782 Malloc0 00:23:35.782 15:04:08 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:36.043 15:04:08 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.314 15:04:09 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.314 [2024-12-01 15:04:09.348956] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.314 15:04:09 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:36.587 [2024-12-01 15:04:09.569089] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:36.587 15:04:09 -- host/multipath.sh@44 -- # bdevperf_pid=99137 00:23:36.587 15:04:09 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:36.587 15:04:09 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.587 15:04:09 -- host/multipath.sh@47 -- # waitforlisten 99137 /var/tmp/bdevperf.sock 00:23:36.587 15:04:09 -- common/autotest_common.sh@829 -- # '[' -z 99137 ']' 00:23:36.587 15:04:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.587 15:04:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.587 15:04:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
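Condensed for readability, the target-side RPC sequence traced above (scripts/rpc.py invocations, flags copied from the log) is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0                  # 64 MB backing bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf (started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90) then attaches Nvme0 over port 4420 and adds the 4421 path with -x multipath, as traced just below; those two paths are what the ANA-state checks in the rest of the run exercise.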
00:23:36.587 15:04:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.587 15:04:09 -- common/autotest_common.sh@10 -- # set +x 00:23:37.583 15:04:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.583 15:04:10 -- common/autotest_common.sh@862 -- # return 0 00:23:37.583 15:04:10 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:37.840 15:04:10 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:38.098 Nvme0n1 00:23:38.357 15:04:11 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:38.615 Nvme0n1 00:23:38.615 15:04:11 -- host/multipath.sh@78 -- # sleep 1 00:23:38.615 15:04:11 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:39.550 15:04:12 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:39.550 15:04:12 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:39.808 15:04:12 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:40.066 15:04:13 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:40.066 15:04:13 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99032 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:40.066 15:04:13 -- host/multipath.sh@65 -- # dtrace_pid=99223 00:23:40.066 15:04:13 -- host/multipath.sh@66 -- # sleep 6 00:23:46.646 15:04:19 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:46.646 15:04:19 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:46.646 15:04:19 -- host/multipath.sh@67 -- # active_port=4421 00:23:46.646 15:04:19 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.646 Attaching 4 probes... 
00:23:46.646 @path[10.0.0.2, 4421]: 20911 00:23:46.646 @path[10.0.0.2, 4421]: 21463 00:23:46.646 @path[10.0.0.2, 4421]: 21263 00:23:46.646 @path[10.0.0.2, 4421]: 21509 00:23:46.646 @path[10.0.0.2, 4421]: 21434 00:23:46.646 15:04:19 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:46.646 15:04:19 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:46.646 15:04:19 -- host/multipath.sh@69 -- # sed -n 1p 00:23:46.646 15:04:19 -- host/multipath.sh@69 -- # port=4421 00:23:46.646 15:04:19 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:46.646 15:04:19 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:46.646 15:04:19 -- host/multipath.sh@72 -- # kill 99223 00:23:46.646 15:04:19 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.646 15:04:19 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:46.646 15:04:19 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:46.646 15:04:19 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:46.905 15:04:19 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:46.905 15:04:19 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99032 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:46.905 15:04:19 -- host/multipath.sh@65 -- # dtrace_pid=99356 00:23:46.905 15:04:19 -- host/multipath.sh@66 -- # sleep 6 00:23:53.458 15:04:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:53.458 15:04:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:53.458 15:04:26 -- host/multipath.sh@67 -- # active_port=4420 00:23:53.458 15:04:26 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:53.458 Attaching 4 probes... 
00:23:53.458 @path[10.0.0.2, 4420]: 21846 00:23:53.458 @path[10.0.0.2, 4420]: 22219 00:23:53.458 @path[10.0.0.2, 4420]: 22069 00:23:53.458 @path[10.0.0.2, 4420]: 22228 00:23:53.458 @path[10.0.0.2, 4420]: 22228 00:23:53.458 15:04:26 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:53.458 15:04:26 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:53.458 15:04:26 -- host/multipath.sh@69 -- # sed -n 1p 00:23:53.458 15:04:26 -- host/multipath.sh@69 -- # port=4420 00:23:53.458 15:04:26 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:53.458 15:04:26 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:53.458 15:04:26 -- host/multipath.sh@72 -- # kill 99356 00:23:53.458 15:04:26 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:53.458 15:04:26 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:53.458 15:04:26 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:53.458 15:04:26 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:53.734 15:04:26 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:53.734 15:04:26 -- host/multipath.sh@65 -- # dtrace_pid=99486 00:23:53.734 15:04:26 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99032 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:53.734 15:04:26 -- host/multipath.sh@66 -- # sleep 6 00:24:00.285 15:04:32 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:00.285 15:04:32 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:00.285 15:04:32 -- host/multipath.sh@67 -- # active_port=4421 00:24:00.285 15:04:32 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.285 Attaching 4 probes... 
00:24:00.285 @path[10.0.0.2, 4421]: 16078 00:24:00.285 @path[10.0.0.2, 4421]: 21178 00:24:00.285 @path[10.0.0.2, 4421]: 20975 00:24:00.285 @path[10.0.0.2, 4421]: 21060 00:24:00.285 @path[10.0.0.2, 4421]: 21321 00:24:00.285 15:04:32 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:00.285 15:04:32 -- host/multipath.sh@69 -- # sed -n 1p 00:24:00.285 15:04:32 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:00.285 15:04:32 -- host/multipath.sh@69 -- # port=4421 00:24:00.285 15:04:32 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:00.285 15:04:32 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:00.285 15:04:32 -- host/multipath.sh@72 -- # kill 99486 00:24:00.285 15:04:32 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.285 15:04:32 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:00.285 15:04:32 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:00.285 15:04:33 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:00.543 15:04:33 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:00.543 15:04:33 -- host/multipath.sh@65 -- # dtrace_pid=99621 00:24:00.543 15:04:33 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99032 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:00.543 15:04:33 -- host/multipath.sh@66 -- # sleep 6 00:24:07.099 15:04:39 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:07.099 15:04:39 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:07.099 15:04:39 -- host/multipath.sh@67 -- # active_port= 00:24:07.099 15:04:39 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.099 Attaching 4 probes... 
00:24:07.099 00:24:07.099 00:24:07.099 00:24:07.099 00:24:07.099 00:24:07.099 15:04:39 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:07.099 15:04:39 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:07.099 15:04:39 -- host/multipath.sh@69 -- # sed -n 1p 00:24:07.099 15:04:39 -- host/multipath.sh@69 -- # port= 00:24:07.099 15:04:39 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:07.099 15:04:39 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:07.099 15:04:39 -- host/multipath.sh@72 -- # kill 99621 00:24:07.099 15:04:39 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.099 15:04:39 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:07.099 15:04:39 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.099 15:04:40 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:07.357 15:04:40 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:07.357 15:04:40 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99032 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:07.357 15:04:40 -- host/multipath.sh@65 -- # dtrace_pid=99754 00:24:07.357 15:04:40 -- host/multipath.sh@66 -- # sleep 6 00:24:13.916 15:04:46 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:13.917 15:04:46 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:13.917 15:04:46 -- host/multipath.sh@67 -- # active_port=4421 00:24:13.917 15:04:46 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:13.917 Attaching 4 probes... 
00:24:13.917 @path[10.0.0.2, 4421]: 21702 00:24:13.917 @path[10.0.0.2, 4421]: 21750 00:24:13.917 @path[10.0.0.2, 4421]: 21942 00:24:13.917 @path[10.0.0.2, 4421]: 21924 00:24:13.917 @path[10.0.0.2, 4421]: 22066 00:24:13.917 15:04:46 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:13.917 15:04:46 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:13.917 15:04:46 -- host/multipath.sh@69 -- # sed -n 1p 00:24:13.917 15:04:46 -- host/multipath.sh@69 -- # port=4421 00:24:13.917 15:04:46 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:13.917 15:04:46 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:13.917 15:04:46 -- host/multipath.sh@72 -- # kill 99754 00:24:13.917 15:04:46 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:13.917 15:04:46 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:13.917 [2024-12-01 15:04:46.774479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777241] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777258] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777313] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.917 [2024-12-01 15:04:46.777342] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is
same with the state(5) to be set 00:24:13.918 [2024-12-01 15:04:46.777962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.918 [2024-12-01 15:04:46.777970] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.918 [2024-12-01 15:04:46.777979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebd370 is same with the state(5) to be set 00:24:13.918 15:04:46 -- host/multipath.sh@101 -- # sleep 1 00:24:14.853 15:04:47 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:14.853 15:04:47 -- host/multipath.sh@65 -- # dtrace_pid=99884 00:24:14.853 15:04:47 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99032 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:14.854 15:04:47 -- host/multipath.sh@66 -- # sleep 6 00:24:21.416 15:04:53 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:21.416 15:04:53 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:21.416 15:04:54 -- host/multipath.sh@67 -- # active_port=4420 00:24:21.416 15:04:54 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:21.416 Attaching 4 probes... 00:24:21.416 @path[10.0.0.2, 4420]: 21084 00:24:21.416 @path[10.0.0.2, 4420]: 21509 00:24:21.416 @path[10.0.0.2, 4420]: 21611 00:24:21.416 @path[10.0.0.2, 4420]: 21646 00:24:21.416 @path[10.0.0.2, 4420]: 21563 00:24:21.416 15:04:54 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:21.416 15:04:54 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:21.416 15:04:54 -- host/multipath.sh@69 -- # sed -n 1p 00:24:21.416 15:04:54 -- host/multipath.sh@69 -- # port=4420 00:24:21.416 15:04:54 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:21.416 15:04:54 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:21.416 15:04:54 -- host/multipath.sh@72 -- # kill 99884 00:24:21.416 15:04:54 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:21.416 15:04:54 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:21.416 [2024-12-01 15:04:54.356026] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:21.416 15:04:54 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:21.674 15:04:54 -- host/multipath.sh@111 -- # sleep 6 00:24:28.279 15:05:00 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:28.279 15:05:00 -- host/multipath.sh@65 -- # dtrace_pid=100077 00:24:28.279 15:05:00 -- host/multipath.sh@66 -- # sleep 6 00:24:28.279 15:05:00 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99032 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:33.546 15:05:06 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:33.547 15:05:06 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:33.805 15:05:06 -- host/multipath.sh@67 -- # 
active_port=4421 00:24:33.805 15:05:06 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:33.805 Attaching 4 probes... 00:24:33.805 @path[10.0.0.2, 4421]: 20437 00:24:33.805 @path[10.0.0.2, 4421]: 20740 00:24:33.805 @path[10.0.0.2, 4421]: 20799 00:24:33.805 @path[10.0.0.2, 4421]: 20733 00:24:33.805 @path[10.0.0.2, 4421]: 20705 00:24:33.805 15:05:06 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:33.805 15:05:06 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:33.805 15:05:06 -- host/multipath.sh@69 -- # sed -n 1p 00:24:33.805 15:05:06 -- host/multipath.sh@69 -- # port=4421 00:24:33.805 15:05:06 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:33.805 15:05:06 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:33.805 15:05:06 -- host/multipath.sh@72 -- # kill 100077 00:24:33.805 15:05:06 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:33.805 15:05:06 -- host/multipath.sh@114 -- # killprocess 99137 00:24:33.805 15:05:06 -- common/autotest_common.sh@936 -- # '[' -z 99137 ']' 00:24:33.805 15:05:06 -- common/autotest_common.sh@940 -- # kill -0 99137 00:24:33.805 15:05:06 -- common/autotest_common.sh@941 -- # uname 00:24:33.805 15:05:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:33.805 15:05:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99137 00:24:33.805 killing process with pid 99137 00:24:33.805 15:05:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:33.805 15:05:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:33.805 15:05:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99137' 00:24:33.805 15:05:06 -- common/autotest_common.sh@955 -- # kill 99137 00:24:33.805 15:05:06 -- common/autotest_common.sh@960 -- # wait 99137 00:24:34.064 Connection closed with partial response: 00:24:34.064 00:24:34.064 00:24:34.331 15:05:07 -- host/multipath.sh@116 -- # wait 99137 00:24:34.331 15:05:07 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:34.331 [2024-12-01 15:04:09.632257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:34.331 [2024-12-01 15:04:09.632338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99137 ] 00:24:34.331 [2024-12-01 15:04:09.766827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.331 [2024-12-01 15:04:09.846778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.331 Running I/O for 90 seconds... 
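The per-path verification loop repeated throughout the run (confirm_io_on_port in the trace) reduces to the pipeline sketched below; the helper and file names are the ones printed above, and the sketch only restates what the trace already shows.

    # 1. ask the target which listener currently has the expected ANA state
    rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
    # 2. scripts/bpftrace.sh <nvmf pid> scripts/bpf/nvmf_path.bt samples I/O for about 6 seconds
    #    and writes per-path counters such as "@path[10.0.0.2, 4421]: 20437" into trace.txt
    # 3. extract the port that actually served I/O and compare it with the answer from step 1
    awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p

When both listeners are set inaccessible the extracted value is empty, which is why one of the probe dumps above contains no @path lines at all.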
00:24:34.331 [2024-12-01 15:04:19.898244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.331 [2024-12-01 15:04:19.898323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.331 [2024-12-01 15:04:19.898385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.331 [2024-12-01 15:04:19.898418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.331 [2024-12-01 15:04:19.898449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.331 [2024-12-01 15:04:19.898481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.331 [2024-12-01 15:04:19.898513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.331 [2024-12-01 15:04:19.898545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.331 [2024-12-01 15:04:19.898579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.331 [2024-12-01 15:04:19.898609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.331 [2024-12-01 15:04:19.898643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.331 [2024-12-01 15:04:19.898686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.331 [2024-12-01 15:04:19.898729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.331 [2024-12-01 15:04:19.898780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.331 [2024-12-01 15:04:19.898813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.331 [2024-12-01 15:04:19.898845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.331 [2024-12-01 15:04:19.898876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.898895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.331 [2024-12-01 15:04:19.898908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:34.331 [2024-12-01 15:04:19.899217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.332 [2024-12-01 15:04:19.899242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.332 [2024-12-01 15:04:19.899372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.332 [2024-12-01 15:04:19.899403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.332 [2024-12-01 15:04:19.899634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.899979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.899997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.332 [2024-12-01 15:04:19.900010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.900041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.900072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.900102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.332 [2024-12-01 15:04:19.900133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.900165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.332 [2024-12-01 15:04:19.900205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.900247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.332 [2024-12-01 15:04:19.900286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.900317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.332 [2024-12-01 15:04:19.900348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.900366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.900380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.905228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.332 [2024-12-01 15:04:19.905264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.905291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.905307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.905326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.332 [2024-12-01 15:04:19.905343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.332 [2024-12-01 15:04:19.905361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.333 [2024-12-01 15:04:19.905375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.333 [2024-12-01 15:04:19.905407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.333 [2024-12-01 15:04:19.905438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.333 [2024-12-01 15:04:19.905470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:24:34.333 [2024-12-01 15:04:19.905488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.905544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.905582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.905616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.905649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.905681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.333 [2024-12-01 15:04:19.905714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.905747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.905793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.333 [2024-12-01 15:04:19.905830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.333 [2024-12-01 15:04:19.905863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.905897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.905931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.905945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.333 [2024-12-01 15:04:19.908449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.333 [2024-12-01 15:04:19.908632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.908984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.908998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.909017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.909030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.909048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.909062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.909081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:34.333 [2024-12-01 15:04:19.909097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.909115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.909140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.909158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.909171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.909201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.909220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.909243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.333 [2024-12-01 15:04:19.909257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.333 [2024-12-01 15:04:19.909275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.909289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.909308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.909321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.909339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.909353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.909371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.909391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.909410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.909424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.909442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 
nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.909456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.909474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.909488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.909536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.909552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.909571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.909585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.909604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.909618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.909636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.909650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.912209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.912249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.912281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.912311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.912352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.912385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.912415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.912445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.912475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.912505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.912536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.912566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.912596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.912626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:24:34.334 [2024-12-01 15:04:19.912644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.912656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.912688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.912720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.912972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.912995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.913017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.913032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.913051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.913065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.913083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.913106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.913136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:19.913156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.913185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.913203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:19.913221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:19.913235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:26.451666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:26.451746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:26.451822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.334 [2024-12-01 15:04:26.451842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:26.451863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:26.451880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.334 [2024-12-01 15:04:26.451900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.334 [2024-12-01 15:04:26.451923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.451941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.335 [2024-12-01 15:04:26.451955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.335 [2024-12-01 15:04:26.452025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.335 [2024-12-01 15:04:26.452059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.335 [2024-12-01 15:04:26.452168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.452467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:34.335 [2024-12-01 15:04:26.452500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.452987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.335 [2024-12-01 15:04:26.453014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.335 [2024-12-01 15:04:26.453091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.335 [2024-12-01 15:04:26.453200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.335 [2024-12-01 15:04:26.453234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.335 [2024-12-01 15:04:26.453605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.335 [2024-12-01 15:04:26.453642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.335 [2024-12-01 15:04:26.453678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.335 [2024-12-01 15:04:26.453700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.453714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.453735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.453749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.453770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.453798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.453821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.453836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.453867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.453882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.453903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.453933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.453954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.453967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.453988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.454071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.454106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.336 
[2024-12-01 15:04:26.454127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.454177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.454212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.454281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.454322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 
cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.454641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.454676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.454897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.454951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.454978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.336 [2024-12-01 15:04:26.454992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.336 [2024-12-01 15:04:26.455016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.336 [2024-12-01 15:04:26.455031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:34.336-00:24:34.342 nvme_qpair.c: repeated *NOTICE* output from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion for READ and WRITE commands on sqid:1 nsid:1 (lba 8184-8944 at 15:04:26, lba 24056-25096 at 15:04:33, lba 102048-102928 at 15:04:46), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1; from 15:04:46.778 onward the completions instead report ABORTED - SQ DELETION (00/08) qid:1 cid:0.
00:24:34.342 [2024-12-01 15:04:46.779826] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.779840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.779855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.779867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.779881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.342 [2024-12-01 15:04:46.779893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.779906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.342 [2024-12-01 15:04:46.779919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.779933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.342 [2024-12-01 15:04:46.779945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.779958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.342 [2024-12-01 15:04:46.779970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.779983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.342 [2024-12-01 15:04:46.779996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.342 [2024-12-01 15:04:46.780022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.342 [2024-12-01 15:04:46.780074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.342 [2024-12-01 15:04:46.780546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.342 [2024-12-01 15:04:46.780570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.342 [2024-12-01 15:04:46.780594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.342 [2024-12-01 15:04:46.780607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.342 [2024-12-01 15:04:46.780624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 
[2024-12-01 15:04:46.780649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.780674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.780699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.780724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.780749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.780819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.780846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.780880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.780905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.780930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.780955] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.780981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.780995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781505] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.343 [2024-12-01 15:04:46.781716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.343 [2024-12-01 15:04:46.781769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.343 [2024-12-01 15:04:46.781794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.344 [2024-12-01 15:04:46.781821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.344 [2024-12-01 15:04:46.781835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209e060 is same with the state(5) to be set 00:24:34.344 [2024-12-01 15:04:46.781892] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x209e060 was disconnected 
and freed. reset controller. 00:24:34.344 [2024-12-01 15:04:46.783137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.344 [2024-12-01 15:04:46.783187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.344 [2024-12-01 15:04:46.783205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.344 [2024-12-01 15:04:46.783228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20afa00 (9): Bad file descriptor 00:24:34.344 [2024-12-01 15:04:46.783390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.344 [2024-12-01 15:04:46.783445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.344 [2024-12-01 15:04:46.783466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20afa00 with addr=10.0.0.2, port=4421 00:24:34.344 [2024-12-01 15:04:46.783481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20afa00 is same with the state(5) to be set 00:24:34.344 [2024-12-01 15:04:46.783504] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20afa00 (9): Bad file descriptor 00:24:34.344 [2024-12-01 15:04:46.783539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.344 [2024-12-01 15:04:46.783553] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.344 [2024-12-01 15:04:46.783565] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.344 [2024-12-01 15:04:46.783587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.344 [2024-12-01 15:04:46.783600] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.344 [2024-12-01 15:04:56.834832] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
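The burst of ABORTED - SQ DELETION completions above, followed by the qpair disconnect, the failed reconnect attempts (errno 111), and finally "Resetting controller successful", is the multipath failover being exercised: the test appears to drop the active TCP listener while bdevperf I/O is in flight, so bdev_nvme tears down the qpair, aborts the outstanding commands, and reconnects on the other path (10.0.0.2 port 4421 in this run). A minimal sketch of that path flip, using only rpc.py subcommands and flags that appear elsewhere in this log; it is illustrative, not the verbatim multipath.sh body, and the sleep is a placeholder for the script's own pacing:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Drop the path the host is currently using; in-flight I/O on that qpair
  # completes with ABORTED - SQ DELETION, as logged above.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  # bdev_nvme resets the controller and, after a few refused connects,
  # reattaches on the surviving listener (10.0.0.2:4421), then I/O resumes.
  sleep 10
  # Restore the first path so a later iteration can flip back.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
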
00:24:34.344 Received shutdown signal, test time was about 55.210554 seconds 00:24:34.344 00:24:34.344 Latency(us) 00:24:34.344 [2024-12-01T15:05:07.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.344 [2024-12-01T15:05:07.459Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:34.344 Verification LBA range: start 0x0 length 0x4000 00:24:34.344 Nvme0n1 : 55.21 12159.55 47.50 0.00 0.00 10511.52 268.10 7015926.69 00:24:34.344 [2024-12-01T15:05:07.459Z] =================================================================================================================== 00:24:34.344 [2024-12-01T15:05:07.459Z] Total : 12159.55 47.50 0.00 0.00 10511.52 268.10 7015926.69 00:24:34.344 15:05:07 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:34.344 15:05:07 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:34.344 15:05:07 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:34.344 15:05:07 -- host/multipath.sh@125 -- # nvmftestfini 00:24:34.344 15:05:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:34.344 15:05:07 -- nvmf/common.sh@116 -- # sync 00:24:34.603 15:05:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:34.603 15:05:07 -- nvmf/common.sh@119 -- # set +e 00:24:34.603 15:05:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:34.603 15:05:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:34.603 rmmod nvme_tcp 00:24:34.603 rmmod nvme_fabrics 00:24:34.603 rmmod nvme_keyring 00:24:34.603 15:05:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:34.603 15:05:07 -- nvmf/common.sh@123 -- # set -e 00:24:34.603 15:05:07 -- nvmf/common.sh@124 -- # return 0 00:24:34.603 15:05:07 -- nvmf/common.sh@477 -- # '[' -n 99032 ']' 00:24:34.603 15:05:07 -- nvmf/common.sh@478 -- # killprocess 99032 00:24:34.603 15:05:07 -- common/autotest_common.sh@936 -- # '[' -z 99032 ']' 00:24:34.603 15:05:07 -- common/autotest_common.sh@940 -- # kill -0 99032 00:24:34.603 15:05:07 -- common/autotest_common.sh@941 -- # uname 00:24:34.603 15:05:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:34.603 15:05:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99032 00:24:34.603 killing process with pid 99032 00:24:34.603 15:05:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:34.603 15:05:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:34.603 15:05:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99032' 00:24:34.603 15:05:07 -- common/autotest_common.sh@955 -- # kill 99032 00:24:34.603 15:05:07 -- common/autotest_common.sh@960 -- # wait 99032 00:24:34.862 15:05:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:34.862 15:05:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:34.862 15:05:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:34.862 15:05:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:34.862 15:05:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:34.862 15:05:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.862 15:05:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.862 15:05:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.862 15:05:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:34.862 00:24:34.862 real 1m1.306s 00:24:34.862 user 2m51.833s 00:24:34.862 
sys 0m14.630s 00:24:34.862 15:05:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:34.862 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.862 ************************************ 00:24:34.862 END TEST nvmf_multipath 00:24:34.862 ************************************ 00:24:34.862 15:05:07 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:34.862 15:05:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:34.862 15:05:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:34.862 15:05:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.862 ************************************ 00:24:34.862 START TEST nvmf_timeout 00:24:34.862 ************************************ 00:24:34.862 15:05:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:34.862 * Looking for test storage... 00:24:34.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:34.862 15:05:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:34.863 15:05:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:34.863 15:05:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:35.122 15:05:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:35.122 15:05:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:35.122 15:05:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:35.122 15:05:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:35.122 15:05:08 -- scripts/common.sh@335 -- # IFS=.-: 00:24:35.122 15:05:08 -- scripts/common.sh@335 -- # read -ra ver1 00:24:35.122 15:05:08 -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.122 15:05:08 -- scripts/common.sh@336 -- # read -ra ver2 00:24:35.122 15:05:08 -- scripts/common.sh@337 -- # local 'op=<' 00:24:35.122 15:05:08 -- scripts/common.sh@339 -- # ver1_l=2 00:24:35.122 15:05:08 -- scripts/common.sh@340 -- # ver2_l=1 00:24:35.122 15:05:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:35.122 15:05:08 -- scripts/common.sh@343 -- # case "$op" in 00:24:35.122 15:05:08 -- scripts/common.sh@344 -- # : 1 00:24:35.122 15:05:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:35.122 15:05:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.122 15:05:08 -- scripts/common.sh@364 -- # decimal 1 00:24:35.122 15:05:08 -- scripts/common.sh@352 -- # local d=1 00:24:35.122 15:05:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.122 15:05:08 -- scripts/common.sh@354 -- # echo 1 00:24:35.122 15:05:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:35.122 15:05:08 -- scripts/common.sh@365 -- # decimal 2 00:24:35.122 15:05:08 -- scripts/common.sh@352 -- # local d=2 00:24:35.122 15:05:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.122 15:05:08 -- scripts/common.sh@354 -- # echo 2 00:24:35.122 15:05:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:35.122 15:05:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:35.122 15:05:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:35.122 15:05:08 -- scripts/common.sh@367 -- # return 0 00:24:35.122 15:05:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.122 15:05:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:35.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.122 --rc genhtml_branch_coverage=1 00:24:35.122 --rc genhtml_function_coverage=1 00:24:35.122 --rc genhtml_legend=1 00:24:35.122 --rc geninfo_all_blocks=1 00:24:35.122 --rc geninfo_unexecuted_blocks=1 00:24:35.122 00:24:35.122 ' 00:24:35.122 15:05:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:35.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.122 --rc genhtml_branch_coverage=1 00:24:35.122 --rc genhtml_function_coverage=1 00:24:35.122 --rc genhtml_legend=1 00:24:35.122 --rc geninfo_all_blocks=1 00:24:35.122 --rc geninfo_unexecuted_blocks=1 00:24:35.122 00:24:35.122 ' 00:24:35.122 15:05:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:35.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.122 --rc genhtml_branch_coverage=1 00:24:35.122 --rc genhtml_function_coverage=1 00:24:35.122 --rc genhtml_legend=1 00:24:35.122 --rc geninfo_all_blocks=1 00:24:35.122 --rc geninfo_unexecuted_blocks=1 00:24:35.122 00:24:35.122 ' 00:24:35.122 15:05:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:35.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.122 --rc genhtml_branch_coverage=1 00:24:35.122 --rc genhtml_function_coverage=1 00:24:35.122 --rc genhtml_legend=1 00:24:35.122 --rc geninfo_all_blocks=1 00:24:35.122 --rc geninfo_unexecuted_blocks=1 00:24:35.122 00:24:35.122 ' 00:24:35.122 15:05:08 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:35.122 15:05:08 -- nvmf/common.sh@7 -- # uname -s 00:24:35.122 15:05:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.122 15:05:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.122 15:05:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.122 15:05:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.122 15:05:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.122 15:05:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.122 15:05:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.122 15:05:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.122 15:05:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.123 15:05:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.123 15:05:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:24:35.123 
15:05:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:24:35.123 15:05:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.123 15:05:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.123 15:05:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:35.123 15:05:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:35.123 15:05:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.123 15:05:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.123 15:05:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.123 15:05:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.123 15:05:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.123 15:05:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.123 15:05:08 -- paths/export.sh@5 -- # export PATH 00:24:35.123 15:05:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.123 15:05:08 -- nvmf/common.sh@46 -- # : 0 00:24:35.123 15:05:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:35.123 15:05:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:35.123 15:05:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:35.123 15:05:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.123 15:05:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.123 15:05:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:35.123 15:05:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:35.123 15:05:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:35.123 15:05:08 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:35.123 15:05:08 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:35.123 15:05:08 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:35.123 15:05:08 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:35.123 15:05:08 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:35.123 15:05:08 -- host/timeout.sh@19 -- # nvmftestinit 00:24:35.123 15:05:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:35.123 15:05:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.123 15:05:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:35.123 15:05:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:35.123 15:05:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:35.123 15:05:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.123 15:05:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.123 15:05:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.123 15:05:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:35.123 15:05:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:35.123 15:05:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:35.123 15:05:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:35.123 15:05:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:35.123 15:05:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:35.123 15:05:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.123 15:05:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.123 15:05:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:35.123 15:05:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:35.123 15:05:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:35.123 15:05:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:35.123 15:05:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:35.123 15:05:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.123 15:05:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:35.123 15:05:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:35.123 15:05:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:35.123 15:05:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:35.123 15:05:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:35.123 15:05:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:35.123 Cannot find device "nvmf_tgt_br" 00:24:35.123 15:05:08 -- nvmf/common.sh@154 -- # true 00:24:35.123 15:05:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:35.123 Cannot find device "nvmf_tgt_br2" 00:24:35.123 15:05:08 -- nvmf/common.sh@155 -- # true 00:24:35.123 15:05:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:35.123 15:05:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:35.123 Cannot find device "nvmf_tgt_br" 00:24:35.123 15:05:08 -- nvmf/common.sh@157 -- # true 00:24:35.123 15:05:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:35.123 Cannot find device "nvmf_tgt_br2" 00:24:35.123 15:05:08 -- nvmf/common.sh@158 -- # true 00:24:35.123 15:05:08 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:35.123 15:05:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:35.123 15:05:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:35.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:35.123 15:05:08 -- nvmf/common.sh@161 -- # true 00:24:35.123 15:05:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:35.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:35.123 15:05:08 -- nvmf/common.sh@162 -- # true 00:24:35.123 15:05:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:35.123 15:05:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:35.123 15:05:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:35.123 15:05:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:35.382 15:05:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:35.382 15:05:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:35.382 15:05:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:35.382 15:05:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:35.382 15:05:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:35.382 15:05:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:35.382 15:05:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:35.382 15:05:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:35.382 15:05:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:35.382 15:05:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:35.382 15:05:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:35.382 15:05:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:35.382 15:05:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:35.382 15:05:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:35.382 15:05:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:35.382 15:05:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:35.382 15:05:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:35.382 15:05:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:35.382 15:05:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:35.382 15:05:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:35.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:24:35.382 00:24:35.382 --- 10.0.0.2 ping statistics --- 00:24:35.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.382 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:24:35.382 15:05:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:35.382 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:35.382 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:24:35.382 00:24:35.382 --- 10.0.0.3 ping statistics --- 00:24:35.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.382 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:24:35.382 15:05:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:35.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:24:35.382 00:24:35.382 --- 10.0.0.1 ping statistics --- 00:24:35.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.382 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:24:35.382 15:05:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.382 15:05:08 -- nvmf/common.sh@421 -- # return 0 00:24:35.382 15:05:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:35.382 15:05:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.382 15:05:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:35.382 15:05:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:35.382 15:05:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.382 15:05:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:35.382 15:05:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:35.382 15:05:08 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:35.382 15:05:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:35.382 15:05:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:35.382 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.382 15:05:08 -- nvmf/common.sh@469 -- # nvmfpid=100406 00:24:35.382 15:05:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:35.382 15:05:08 -- nvmf/common.sh@470 -- # waitforlisten 100406 00:24:35.382 15:05:08 -- common/autotest_common.sh@829 -- # '[' -z 100406 ']' 00:24:35.382 15:05:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.382 15:05:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.382 15:05:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.382 15:05:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.382 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.641 [2024-12-01 15:05:08.499613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:35.641 [2024-12-01 15:05:08.500325] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.641 [2024-12-01 15:05:08.643318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:35.641 [2024-12-01 15:05:08.706069] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:35.641 [2024-12-01 15:05:08.706264] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.641 [2024-12-01 15:05:08.706282] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:35.641 [2024-12-01 15:05:08.706293] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.641 [2024-12-01 15:05:08.706470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.641 [2024-12-01 15:05:08.706491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.578 15:05:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.578 15:05:09 -- common/autotest_common.sh@862 -- # return 0 00:24:36.578 15:05:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:36.578 15:05:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.578 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:36.578 15:05:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.578 15:05:09 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.578 15:05:09 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:36.837 [2024-12-01 15:05:09.726241] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.837 15:05:09 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:37.096 Malloc0 00:24:37.096 15:05:10 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.354 15:05:10 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.355 15:05:10 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.613 [2024-12-01 15:05:10.633120] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.613 15:05:10 -- host/timeout.sh@32 -- # bdevperf_pid=100493 00:24:37.613 15:05:10 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:37.613 15:05:10 -- host/timeout.sh@34 -- # waitforlisten 100493 /var/tmp/bdevperf.sock 00:24:37.613 15:05:10 -- common/autotest_common.sh@829 -- # '[' -z 100493 ']' 00:24:37.613 15:05:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.613 15:05:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.613 15:05:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.613 15:05:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.613 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.613 [2024-12-01 15:05:10.694330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
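Stripped of the xtrace and timing prefixes, the target-side bring-up traced above is roughly the following sequence (paths, names and sizes copied from the trace; the target runs inside the nvmf_tgt_ns_spdk namespace created earlier and is configured over the default /var/tmp/spdk.sock once it is listening):

SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # cores 0-1, all tracepoint groups enabled
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM-backed bdev, 512 B blocks
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420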
00:24:37.613 [2024-12-01 15:05:10.694447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100493 ] 00:24:37.873 [2024-12-01 15:05:10.831927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.873 [2024-12-01 15:05:10.924245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.810 15:05:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.810 15:05:11 -- common/autotest_common.sh@862 -- # return 0 00:24:38.810 15:05:11 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:39.069 15:05:11 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:39.328 NVMe0n1 00:24:39.328 15:05:12 -- host/timeout.sh@51 -- # rpc_pid=100546 00:24:39.328 15:05:12 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.328 15:05:12 -- host/timeout.sh@53 -- # sleep 1 00:24:39.328 Running I/O for 10 seconds... 00:24:40.264 15:05:13 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.525 [2024-12-01 15:05:13.510615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.525 [2024-12-01 15:05:13.510672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.525 [2024-12-01 15:05:13.510698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.525 [2024-12-01 15:05:13.510706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.525 [2024-12-01 15:05:13.510714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.525 [2024-12-01 15:05:13.510722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.525 [2024-12-01 15:05:13.510729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.525 [2024-12-01 15:05:13.510737] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.525 [2024-12-01 15:05:13.510744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.525 [2024-12-01 15:05:13.510752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.525 [2024-12-01 15:05:13.510776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 
00:24:40.526 [2024-12-01 15:05:13.510807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510823] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510839] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510901] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510909] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510961] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.510984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511085] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511093] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d3490 is same with the state(5) to be set 00:24:40.526 [2024-12-01 15:05:13.511857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.511897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.511916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.511932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 
[2024-12-01 15:05:13.511942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.511950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.511959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.511968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.511978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.511985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.511994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.526 [2024-12-01 15:05:13.512267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.526 [2024-12-01 15:05:13.512275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2096 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.527 [2024-12-01 15:05:13.512610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.527 [2024-12-01 15:05:13.512643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 
15:05:13.512659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.527 [2024-12-01 15:05:13.512676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.527 [2024-12-01 15:05:13.512693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.527 [2024-12-01 15:05:13.512709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.527 [2024-12-01 15:05:13.512726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.527 [2024-12-01 15:05:13.512802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.527 [2024-12-01 15:05:13.512819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.527 [2024-12-01 15:05:13.512837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.527 [2024-12-01 15:05:13.512854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.527 [2024-12-01 15:05:13.512864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.527 [2024-12-01 15:05:13.512872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.512882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.512889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.512899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.512906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.512915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.512923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.512932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.512939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.512949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.512957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.512966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.512974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.512983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.512991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.513007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.513046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.513080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.513096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.513114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.513176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.513218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:40.528 [2024-12-01 15:05:13.513227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513399] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.513519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.528 [2024-12-01 15:05:13.513536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.513552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.528 [2024-12-01 15:05:13.513561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:53 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.528 [2024-12-01 15:05:13.513605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.529 [2024-12-01 15:05:13.513643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.529 [2024-12-01 15:05:13.513678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.529 [2024-12-01 15:05:13.513695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.529 [2024-12-01 15:05:13.513731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.529 [2024-12-01 15:05:13.513748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2984 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:40.529 [2024-12-01 15:05:13.513790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.529 [2024-12-01 15:05:13.513843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.513950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 
15:05:13.513968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.513993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.529 [2024-12-01 15:05:13.514016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.529 [2024-12-01 15:05:13.514049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.529 [2024-12-01 15:05:13.514205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec5780 is same with the state(5) to be set 00:24:40.529 [2024-12-01 15:05:13.514224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.529 [2024-12-01 15:05:13.514230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.529 [2024-12-01 15:05:13.514236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:8 PRP1 0x0 PRP2 0x0 00:24:40.529 [2024-12-01 15:05:13.514243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.529 [2024-12-01 15:05:13.514284] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xec5780 was disconnected and freed. reset controller. 00:24:40.529 [2024-12-01 15:05:13.514463] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.529 [2024-12-01 15:05:13.514537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe408c0 (9): Bad file descriptor 00:24:40.529 [2024-12-01 15:05:13.514607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.529 [2024-12-01 15:05:13.514655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.530 [2024-12-01 15:05:13.514670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe408c0 with addr=10.0.0.2, port=4420 00:24:40.530 [2024-12-01 15:05:13.514678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe408c0 is same with the state(5) to be set 00:24:40.530 [2024-12-01 15:05:13.514693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe408c0 (9): Bad file descriptor 00:24:40.530 [2024-12-01 15:05:13.514706] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.530 [2024-12-01 15:05:13.514714] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.530 [2024-12-01 15:05:13.514722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
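The abort storm and the reconnect failures around this point come from the host/timeout.sh sequence, which in condensed form is the initiator-side setup below (commands and option values copied from the trace; the trailing timeline comment is an interpretation of the timestamps in this log, not part of the script):

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$SPDK/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# With the listener gone, queued I/O completes as "ABORTED - SQ DELETION" and bdev_nvme
# retries the connection roughly every 2 s (15:05:13, :15, :17 in this log) until the
# 5 s ctrlr-loss timeout leaves NVMe0 in the failed state, as the following entries show.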
00:24:40.530 [2024-12-01 15:05:13.514738] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.530 [2024-12-01 15:05:13.514746] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.530 15:05:13 -- host/timeout.sh@56 -- # sleep 2 00:24:42.432 [2024-12-01 15:05:15.514822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.432 [2024-12-01 15:05:15.514884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.432 [2024-12-01 15:05:15.514900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe408c0 with addr=10.0.0.2, port=4420 00:24:42.432 [2024-12-01 15:05:15.514910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe408c0 is same with the state(5) to be set 00:24:42.432 [2024-12-01 15:05:15.514927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe408c0 (9): Bad file descriptor 00:24:42.432 [2024-12-01 15:05:15.514940] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.432 [2024-12-01 15:05:15.514949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.432 [2024-12-01 15:05:15.514956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.432 [2024-12-01 15:05:15.514972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.432 [2024-12-01 15:05:15.514982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.432 15:05:15 -- host/timeout.sh@57 -- # get_controller 00:24:42.432 15:05:15 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:42.432 15:05:15 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:42.690 15:05:15 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:42.690 15:05:15 -- host/timeout.sh@58 -- # get_bdev 00:24:42.690 15:05:15 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:42.690 15:05:15 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:42.949 15:05:16 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:42.949 15:05:16 -- host/timeout.sh@61 -- # sleep 5 00:24:44.847 [2024-12-01 15:05:17.515107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.847 [2024-12-01 15:05:17.515173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.847 [2024-12-01 15:05:17.515189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe408c0 with addr=10.0.0.2, port=4420 00:24:44.847 [2024-12-01 15:05:17.515199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe408c0 is same with the state(5) to be set 00:24:44.847 [2024-12-01 15:05:17.515216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe408c0 (9): Bad file descriptor 00:24:44.847 [2024-12-01 15:05:17.515230] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.847 [2024-12-01 15:05:17.515239] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization 
failed 00:24:44.847 [2024-12-01 15:05:17.515247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.847 [2024-12-01 15:05:17.515263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.847 [2024-12-01 15:05:17.515272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.749 [2024-12-01 15:05:19.515356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.749 [2024-12-01 15:05:19.515398] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.749 [2024-12-01 15:05:19.515421] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.749 [2024-12-01 15:05:19.515428] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:46.749 [2024-12-01 15:05:19.515444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:47.685 00:24:47.685 Latency(us) 00:24:47.685 [2024-12-01T15:05:20.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.685 [2024-12-01T15:05:20.800Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:47.685 Verification LBA range: start 0x0 length 0x4000 00:24:47.685 NVMe0n1 : 8.17 2036.80 7.96 15.67 0.00 62281.33 2621.44 7015926.69 00:24:47.685 [2024-12-01T15:05:20.800Z] =================================================================================================================== 00:24:47.685 [2024-12-01T15:05:20.800Z] Total : 2036.80 7.96 15.67 0.00 62281.33 2621.44 7015926.69 00:24:47.685 0 00:24:47.944 15:05:21 -- host/timeout.sh@62 -- # get_controller 00:24:47.944 15:05:21 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:47.944 15:05:21 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:48.203 15:05:21 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:48.203 15:05:21 -- host/timeout.sh@63 -- # get_bdev 00:24:48.203 15:05:21 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:48.203 15:05:21 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:48.462 15:05:21 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:48.462 15:05:21 -- host/timeout.sh@65 -- # wait 100546 00:24:48.462 15:05:21 -- host/timeout.sh@67 -- # killprocess 100493 00:24:48.462 15:05:21 -- common/autotest_common.sh@936 -- # '[' -z 100493 ']' 00:24:48.462 15:05:21 -- common/autotest_common.sh@940 -- # kill -0 100493 00:24:48.462 15:05:21 -- common/autotest_common.sh@941 -- # uname 00:24:48.462 15:05:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:48.462 15:05:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100493 00:24:48.462 15:05:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:48.462 15:05:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:48.462 killing process with pid 100493 00:24:48.462 15:05:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100493' 00:24:48.462 Received shutdown signal, test time was about 9.202277 seconds 00:24:48.462 00:24:48.462 Latency(us) 00:24:48.462 [2024-12-01T15:05:21.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.462 
[2024-12-01T15:05:21.577Z] =================================================================================================================== 00:24:48.462 [2024-12-01T15:05:21.577Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.462 15:05:21 -- common/autotest_common.sh@955 -- # kill 100493 00:24:48.462 15:05:21 -- common/autotest_common.sh@960 -- # wait 100493 00:24:48.722 15:05:21 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.981 [2024-12-01 15:05:21.944575] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.981 15:05:21 -- host/timeout.sh@74 -- # bdevperf_pid=100704 00:24:48.981 15:05:21 -- host/timeout.sh@76 -- # waitforlisten 100704 /var/tmp/bdevperf.sock 00:24:48.981 15:05:21 -- common/autotest_common.sh@829 -- # '[' -z 100704 ']' 00:24:48.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.981 15:05:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.981 15:05:21 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:48.981 15:05:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.981 15:05:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.981 15:05:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.981 15:05:21 -- common/autotest_common.sh@10 -- # set +x 00:24:48.981 [2024-12-01 15:05:22.012395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:48.981 [2024-12-01 15:05:22.012499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100704 ] 00:24:49.240 [2024-12-01 15:05:22.147476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.240 [2024-12-01 15:05:22.204128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.177 15:05:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.177 15:05:22 -- common/autotest_common.sh@862 -- # return 0 00:24:50.177 15:05:22 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:50.177 15:05:23 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:50.435 NVMe0n1 00:24:50.435 15:05:23 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.435 15:05:23 -- host/timeout.sh@84 -- # rpc_pid=100753 00:24:50.435 15:05:23 -- host/timeout.sh@86 -- # sleep 1 00:24:50.694 Running I/O for 10 seconds... 
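(Editor's note, not part of the captured log.) The bdevperf restart traced above attaches the controller with explicit reconnect knobs (--ctrlr-loss-timeout-sec 5, --fast-io-fail-timeout-sec 2, --reconnect-delay-sec 1), which is what lets the upcoming listener outage be ridden out instead of failing the bdev immediately. A minimal sketch of that attach-and-verify sequence, assuming a bdevperf instance is already listening on /var/tmp/bdevperf.sock and the target already exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 (paths, names, and flags taken from the trace above; this is not the test script itself):

  #!/usr/bin/env bash
  # Sketch only: reproduces the rpc.py calls seen in the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # Same options as the trace: "-r -1" first, then attach with bounded reconnect
  # behaviour (retry every 1 s, fail pending I/O after 2 s, drop the controller after 5 s).
  $rpc -s $sock bdev_nvme_set_options -r -1
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

  # Sanity-check that the controller and its namespace bdev showed up,
  # mirroring the get_controller/get_bdev checks in host/timeout.sh.
  [[ "$($rpc -s $sock bdev_nvme_get_controllers | jq -r '.[].name')" == NVMe0 ]]
  [[ "$($rpc -s $sock bdev_get_bdevs | jq -r '.[].name')" == NVMe0n1 ]]

  # Kick off the I/O workload configured on the bdevperf command line.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests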
00:24:51.633 15:05:24 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.633 [2024-12-01 15:05:24.735050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735107] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735141] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735163] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735199] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735212] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735246] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78ca0 is same with the state(5) to be set 00:24:51.633 [2024-12-01 15:05:24.735617] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.633 [2024-12-01 15:05:24.735642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.633 [2024-12-01 15:05:24.735661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.633 [2024-12-01 15:05:24.735670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.633 [2024-12-01 15:05:24.735679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.633 [2024-12-01 15:05:24.735687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8600 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.735990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.735997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 
[2024-12-01 15:05:24.736015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.634 [2024-12-01 15:05:24.736032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.634 [2024-12-01 15:05:24.736048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.634 [2024-12-01 15:05:24.736081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.634 [2024-12-01 15:05:24.736155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.634 [2024-12-01 15:05:24.736211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.634 [2024-12-01 15:05:24.736227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.634 [2024-12-01 15:05:24.736277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.634 [2024-12-01 15:05:24.736310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.634 [2024-12-01 15:05:24.736384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.634 [2024-12-01 15:05:24.736391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:51.635 [2024-12-01 15:05:24.736537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736699] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736907] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.736947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.736989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.736997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.737005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.737013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.737021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.737029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.737038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.737045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.737054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.737062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.737070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9584 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.635 [2024-12-01 15:05:24.737077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.635 [2024-12-01 15:05:24.737086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.635 [2024-12-01 15:05:24.737093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 
15:05:24.737261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.636 [2024-12-01 15:05:24.737764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.636 [2024-12-01 15:05:24.737799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:51.636 [2024-12-01 15:05:24.737817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.636 [2024-12-01 15:05:24.737824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.637 [2024-12-01 15:05:24.737834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.637 [2024-12-01 15:05:24.737847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.637 [2024-12-01 15:05:24.737861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.637 [2024-12-01 15:05:24.737870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.637 [2024-12-01 15:05:24.737879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.637 [2024-12-01 15:05:24.737887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.637 [2024-12-01 15:05:24.737897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.637 [2024-12-01 15:05:24.737909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.637 [2024-12-01 15:05:24.737918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cb660 is same with the state(5) to be set 00:24:51.637 [2024-12-01 15:05:24.737929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:51.637 [2024-12-01 15:05:24.737936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:51.637 [2024-12-01 15:05:24.737943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9224 len:8 PRP1 0x0 PRP2 0x0 00:24:51.637 [2024-12-01 15:05:24.737950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.637 [2024-12-01 15:05:24.738015] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17cb660 was disconnected and freed. reset controller. 
00:24:51.637 [2024-12-01 15:05:24.738214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.637 [2024-12-01 15:05:24.738276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17468c0 (9): Bad file descriptor 00:24:51.637 [2024-12-01 15:05:24.738379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.637 [2024-12-01 15:05:24.738431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.637 [2024-12-01 15:05:24.738445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17468c0 with addr=10.0.0.2, port=4420 00:24:51.637 [2024-12-01 15:05:24.738454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17468c0 is same with the state(5) to be set 00:24:51.637 [2024-12-01 15:05:24.738469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17468c0 (9): Bad file descriptor 00:24:51.637 [2024-12-01 15:05:24.738482] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.637 [2024-12-01 15:05:24.738490] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.637 [2024-12-01 15:05:24.738498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.637 [2024-12-01 15:05:24.738514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.637 [2024-12-01 15:05:24.738524] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.896 15:05:24 -- host/timeout.sh@90 -- # sleep 1 00:24:52.830 [2024-12-01 15:05:25.738586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.830 [2024-12-01 15:05:25.738654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.830 [2024-12-01 15:05:25.738669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17468c0 with addr=10.0.0.2, port=4420 00:24:52.830 [2024-12-01 15:05:25.738679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17468c0 is same with the state(5) to be set 00:24:52.830 [2024-12-01 15:05:25.738695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17468c0 (9): Bad file descriptor 00:24:52.830 [2024-12-01 15:05:25.738709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.830 [2024-12-01 15:05:25.738717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.830 [2024-12-01 15:05:25.738725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.830 [2024-12-01 15:05:25.738741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
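Up to this point every reconnect to 10.0.0.2 port 4420 fails in posix_sock_create with errno = 111 (connection refused) and the controller is left in the failed state, so host/timeout.sh sleeps and then, in the lines that follow, restores the TCP listener over JSON-RPC, after which the reset completes successfully. For reference, these are the two listener RPCs this test drives, with the same arguments as the invocations recorded in this log (shown here with the repository-relative scripts/rpc.py path instead of the absolute /home/vagrant/spdk_repo path):

  $ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $ scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420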
00:24:52.830 [2024-12-01 15:05:25.738750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.830 15:05:25 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.088 [2024-12-01 15:05:26.013833] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.088 15:05:26 -- host/timeout.sh@92 -- # wait 100753 00:24:53.654 [2024-12-01 15:05:26.751982] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:01.777 00:25:01.777 Latency(us) 00:25:01.777 [2024-12-01T15:05:34.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.777 [2024-12-01T15:05:34.892Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:01.777 Verification LBA range: start 0x0 length 0x4000 00:25:01.777 NVMe0n1 : 10.01 10846.83 42.37 0.00 0.00 11786.29 1347.96 3019898.88 00:25:01.777 [2024-12-01T15:05:34.892Z] =================================================================================================================== 00:25:01.777 [2024-12-01T15:05:34.892Z] Total : 10846.83 42.37 0.00 0.00 11786.29 1347.96 3019898.88 00:25:01.777 0 00:25:01.777 15:05:33 -- host/timeout.sh@97 -- # rpc_pid=100870 00:25:01.777 15:05:33 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:01.777 15:05:33 -- host/timeout.sh@98 -- # sleep 1 00:25:01.777 Running I/O for 10 seconds... 00:25:01.777 15:05:34 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.777 [2024-12-01 15:05:34.859042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859113] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859139] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.777 [2024-12-01 15:05:34.859258] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859264] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859271] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859292] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859380] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859418] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859425] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 
15:05:34.859433] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859463] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859471] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859547] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859554] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859568] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859588] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same 
with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859603] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859626] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859633] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859664] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.859729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d4110 is same with the state(5) to be set 00:25:01.778 [2024-12-01 15:05:34.860330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 
15:05:34.860399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.778 [2024-12-01 15:05:34.860553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.778 [2024-12-01 15:05:34.860562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860730] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 
nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.860990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.860997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.861014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.861030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.779 [2024-12-01 15:05:34.861047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.779 [2024-12-01 15:05:34.861063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.779 [2024-12-01 15:05:34.861079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127024 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.861095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.861120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.779 [2024-12-01 15:05:34.861141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.779 [2024-12-01 15:05:34.861160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.779 [2024-12-01 15:05:34.861176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.779 [2024-12-01 15:05:34.861199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.861215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.779 [2024-12-01 15:05:34.861231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.779 [2024-12-01 15:05:34.861247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.779 [2024-12-01 15:05:34.861256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.779 [2024-12-01 15:05:34.861263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:01.780 [2024-12-01 15:05:34.861279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 
15:05:34.861444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861841] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.780 [2024-12-01 15:05:34.861908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.780 [2024-12-01 15:05:34.861983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.780 [2024-12-01 15:05:34.861996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 
[2024-12-01 15:05:34.862386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.781 [2024-12-01 15:05:34.862495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862555] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.781 [2024-12-01 15:05:34.862610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17971d0 is same with the state(5) to be set 00:25:01.781 [2024-12-01 15:05:34.862627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.781 [2024-12-01 15:05:34.862634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.781 [2024-12-01 15:05:34.862645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126976 len:8 PRP1 0x0 PRP2 0x0 00:25:01.781 [2024-12-01 15:05:34.862652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862680] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17971d0 was disconnected and freed. reset controller. 
00:25:01.781 [2024-12-01 15:05:34.862739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.781 [2024-12-01 15:05:34.862762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.781 [2024-12-01 15:05:34.862775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.782 [2024-12-01 15:05:34.862783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.782 [2024-12-01 15:05:34.862796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.782 [2024-12-01 15:05:34.862803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.782 [2024-12-01 15:05:34.862811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.782 [2024-12-01 15:05:34.862818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.782 [2024-12-01 15:05:34.862825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17468c0 is same with the state(5) to be set 00:25:01.782 [2024-12-01 15:05:34.863003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.782 [2024-12-01 15:05:34.863023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17468c0 (9): Bad file descriptor 00:25:01.782 [2024-12-01 15:05:34.863090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.782 [2024-12-01 15:05:34.863128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.782 [2024-12-01 15:05:34.863148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17468c0 with addr=10.0.0.2, port=4420 00:25:01.782 [2024-12-01 15:05:34.863156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17468c0 is same with the state(5) to be set 00:25:01.782 [2024-12-01 15:05:34.863171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17468c0 (9): Bad file descriptor 00:25:01.782 [2024-12-01 15:05:34.863184] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.782 [2024-12-01 15:05:34.863192] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.782 [2024-12-01 15:05:34.863206] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.782 [2024-12-01 15:05:34.877850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.782 [2024-12-01 15:05:34.877878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.040 15:05:34 -- host/timeout.sh@101 -- # sleep 3 00:25:02.975 [2024-12-01 15:05:35.877962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-12-01 15:05:35.878060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-12-01 15:05:35.878076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17468c0 with addr=10.0.0.2, port=4420 00:25:02.975 [2024-12-01 15:05:35.878086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17468c0 is same with the state(5) to be set 00:25:02.975 [2024-12-01 15:05:35.878103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17468c0 (9): Bad file descriptor 00:25:02.975 [2024-12-01 15:05:35.878117] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.975 [2024-12-01 15:05:35.878126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.975 [2024-12-01 15:05:35.878133] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.975 [2024-12-01 15:05:35.878149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.975 [2024-12-01 15:05:35.878158] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.938 [2024-12-01 15:05:36.878221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.938 [2024-12-01 15:05:36.878294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.939 [2024-12-01 15:05:36.878309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17468c0 with addr=10.0.0.2, port=4420 00:25:03.939 [2024-12-01 15:05:36.878319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17468c0 is same with the state(5) to be set 00:25:03.939 [2024-12-01 15:05:36.878335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17468c0 (9): Bad file descriptor 00:25:03.939 [2024-12-01 15:05:36.878349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.939 [2024-12-01 15:05:36.878357] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.939 [2024-12-01 15:05:36.878365] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.939 [2024-12-01 15:05:36.878382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
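The repeated posix_sock_create connect() failures above (errno = 111, which is ECONNREFUSED on Linux) are the bdev_nvme reconnect loop running while the target's TCP listener is down: each attempt ends in "controller reinitialization failed" and the reset is retried. How this loop behaves is configured when the controller is attached; for the second bdevperf instance later in this trace the knobs are passed explicitly, and a condensed sketch of that invocation follows for reference -- every path, flag, and name is copied from the trace further down, nothing here is a new command:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2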
00:25:03.939 [2024-12-01 15:05:36.878392] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.889 [2024-12-01 15:05:37.878659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-12-01 15:05:37.878733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.889 [2024-12-01 15:05:37.878749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17468c0 with addr=10.0.0.2, port=4420 00:25:04.889 [2024-12-01 15:05:37.878771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17468c0 is same with the state(5) to be set 00:25:04.889 [2024-12-01 15:05:37.878916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17468c0 (9): Bad file descriptor 00:25:04.889 [2024-12-01 15:05:37.878993] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:04.889 [2024-12-01 15:05:37.879012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:04.889 [2024-12-01 15:05:37.879020] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:04.889 [2024-12-01 15:05:37.880919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.889 [2024-12-01 15:05:37.880943] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.889 15:05:37 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.147 [2024-12-01 15:05:38.146436] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.148 15:05:38 -- host/timeout.sh@103 -- # wait 100870 00:25:06.083 [2024-12-01 15:05:38.906351] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
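After the sleep at host/timeout.sh@101, the test re-adds the TCP listener (host/timeout.sh@102 above) and the pending reconnect finally lands: "NVMe/TCP Target Listening on 10.0.0.2 port 4420" is followed by "Resetting controller successful." The outage/recovery cycle being exercised is, on the target side, just a listener toggle. Both rpc.py commands in the sketch below appear verbatim in this trace (remove_listener at host/timeout.sh@126, add_listener at @102); only their pairing into a single snippet is illustrative:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target stops accepting new connections; initiator connect() starts failing with errno 111
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3    # host/timeout.sh@101
    # listener restored; the next bdev_nvme reconnect attempt succeeds and the controller resets
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420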
00:25:11.352 00:25:11.352 Latency(us) 00:25:11.352 [2024-12-01T15:05:44.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.352 [2024-12-01T15:05:44.467Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:11.352 Verification LBA range: start 0x0 length 0x4000 00:25:11.352 NVMe0n1 : 10.01 8583.44 33.53 7475.65 0.00 7959.34 670.25 3019898.88 00:25:11.352 [2024-12-01T15:05:44.467Z] =================================================================================================================== 00:25:11.352 [2024-12-01T15:05:44.467Z] Total : 8583.44 33.53 7475.65 0.00 7959.34 0.00 3019898.88 00:25:11.352 0 00:25:11.352 15:05:43 -- host/timeout.sh@105 -- # killprocess 100704 00:25:11.352 15:05:43 -- common/autotest_common.sh@936 -- # '[' -z 100704 ']' 00:25:11.352 15:05:43 -- common/autotest_common.sh@940 -- # kill -0 100704 00:25:11.352 15:05:43 -- common/autotest_common.sh@941 -- # uname 00:25:11.352 15:05:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:11.352 15:05:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100704 00:25:11.352 killing process with pid 100704 00:25:11.352 Received shutdown signal, test time was about 10.000000 seconds 00:25:11.352 00:25:11.352 Latency(us) 00:25:11.352 [2024-12-01T15:05:44.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.352 [2024-12-01T15:05:44.467Z] =================================================================================================================== 00:25:11.352 [2024-12-01T15:05:44.467Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.352 15:05:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:11.352 15:05:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:11.352 15:05:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100704' 00:25:11.352 15:05:43 -- common/autotest_common.sh@955 -- # kill 100704 00:25:11.352 15:05:43 -- common/autotest_common.sh@960 -- # wait 100704 00:25:11.352 15:05:43 -- host/timeout.sh@110 -- # bdevperf_pid=101000 00:25:11.352 15:05:43 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:11.352 15:05:43 -- host/timeout.sh@112 -- # waitforlisten 101000 /var/tmp/bdevperf.sock 00:25:11.352 15:05:43 -- common/autotest_common.sh@829 -- # '[' -z 101000 ']' 00:25:11.352 15:05:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:11.352 15:05:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.352 15:05:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:11.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:11.352 15:05:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.352 15:05:43 -- common/autotest_common.sh@10 -- # set +x 00:25:11.352 [2024-12-01 15:05:44.024392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
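Above, the old bdevperf process (pid 100704) is killed and a fresh one (pid 101000) is started in RPC-server mode; as the trace shows, it only begins I/O after perform_tests is sent over /var/tmp/bdevperf.sock. A minimal stand-alone sketch of that sequence, reusing the binary path and flags from the trace -- the busy-wait on the socket is an illustrative stand-in for the test's waitforlisten helper, and the attach step is the rpc.py call shown further down:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    # crude stand-in for waitforlisten: wait until the RPC UNIX socket exists
    until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
    # ... bdev_nvme_set_options / bdev_nvme_attach_controller via rpc.py -s /var/tmp/bdevperf.sock ...
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests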
00:25:11.352 [2024-12-01 15:05:44.024685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101000 ] 00:25:11.352 [2024-12-01 15:05:44.156419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.352 [2024-12-01 15:05:44.202195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.919 15:05:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.919 15:05:45 -- common/autotest_common.sh@862 -- # return 0 00:25:11.919 15:05:45 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 101000 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:11.919 15:05:45 -- host/timeout.sh@116 -- # dtrace_pid=101029 00:25:11.919 15:05:45 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:12.492 15:05:45 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:12.492 NVMe0n1 00:25:12.492 15:05:45 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:12.492 15:05:45 -- host/timeout.sh@124 -- # rpc_pid=101085 00:25:12.492 15:05:45 -- host/timeout.sh@125 -- # sleep 1 00:25:12.750 Running I/O for 10 seconds... 00:25:13.684 15:05:46 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.948 [2024-12-01 15:05:46.888040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.888504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.888599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.888653] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.888709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.888796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.888851] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.888924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.888988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889199] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.889986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.890030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.890084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.890131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.948 [2024-12-01 15:05:46.890174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890329] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890555] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.890999] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891342] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891384] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891490] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891603] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the 
state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.891964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892093] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892258] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892596] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892656] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892708] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.892997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893055] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893196] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893330] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893583] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893666] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.893969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894524] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 
15:05:46.894582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894806] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.894974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895137] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895280] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895356] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895414] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895708] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.895941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.896027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.896073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same 
with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.896118] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.896182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.896236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.896290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.896335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.896378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.896421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7ba0 is same with the state(5) to be set 00:25:13.949 [2024-12-01 15:05:46.896738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.949 [2024-12-01 15:05:46.896809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.949 [2024-12-01 15:05:46.896846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.949 [2024-12-01 15:05:46.896856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.949 [2024-12-01 15:05:46.896867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.949 [2024-12-01 15:05:46.896892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.949 [2024-12-01 15:05:46.896903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.949 [2024-12-01 15:05:46.896912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.949 [2024-12-01 15:05:46.896922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.949 [2024-12-01 15:05:46.896932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.949 [2024-12-01 15:05:46.896942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.949 [2024-12-01 15:05:46.896951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.949 [2024-12-01 15:05:46.896961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.949 [2024-12-01 15:05:46.896970] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.896996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:13.950 [2024-12-01 15:05:46.897393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 
15:05:46.897581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.897970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.897994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.898004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.898013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.898023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:78 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.898032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.898042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.898050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.898060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.898069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.898079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.898088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.898098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.898106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.898116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.950 [2024-12-01 15:05:46.898124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.950 [2024-12-01 15:05:46.898134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78776 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:13.951 [2024-12-01 15:05:46.898408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898597] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.898983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.898991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.899001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.899009] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.899020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.899028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.899043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.899051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.951 [2024-12-01 15:05:46.899061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.951 [2024-12-01 15:05:46.899070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.952 [2024-12-01 15:05:46.899367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1409780 is same with the state(5) to be set 00:25:13.952 [2024-12-01 15:05:46.899388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:13.952 [2024-12-01 15:05:46.899395] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:13.952 [2024-12-01 15:05:46.899402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117632 len:8 PRP1 0x0 PRP2 0x0 00:25:13.952 [2024-12-01 15:05:46.899411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899460] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1409780 was disconnected and freed. reset controller. 00:25:13.952 [2024-12-01 15:05:46.899556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.952 [2024-12-01 15:05:46.899573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.952 [2024-12-01 15:05:46.899591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.952 [2024-12-01 15:05:46.899613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.952 [2024-12-01 15:05:46.899630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.952 [2024-12-01 15:05:46.899638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13848c0 is same with the state(5) to be set 00:25:13.952 [2024-12-01 15:05:46.899860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.952 [2024-12-01 15:05:46.899889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13848c0 (9): Bad file descriptor 00:25:13.952 [2024-12-01 15:05:46.905013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13848c0 (9): Bad file descriptor 00:25:13.952 [2024-12-01 15:05:46.905047] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.952 [2024-12-01 15:05:46.905058] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.952 [2024-12-01 15:05:46.905069] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.952 [2024-12-01 15:05:46.905087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
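Editor's note: the block above is the host-side teardown path in one piece — once the target drops the connection, every queued READ on I/O qpair 1 is completed with ABORTED - SQ DELETION (00/08), the last queued request is completed manually, qpair 0x1409780 is freed, the admin qpair's ASYNC EVENT REQUESTs are aborted the same way, and bdev_nvme schedules a controller reset that immediately fails on the dead socket (Bad file descriptor). A quick way to summarize a capture like this is to count the abort completions per queue; the log path below is hypothetical, only the grep patterns are taken verbatim from the output above.

    # Hypothetical post-mortem check on a saved copy of this output.
    log=/tmp/nvmf_timeout_host.log                          # assumed capture location
    grep -c 'ABORTED - SQ DELETION (00/08) qid:1' "$log"    # I/O-qpair aborts (the READ flood above)
    grep -c 'ABORTED - SQ DELETION (00/08) qid:0' "$log"    # admin-qpair aborts (ASYNC EVENT REQUESTs)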
00:25:13.952 [2024-12-01 15:05:46.905098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.952 15:05:46 -- host/timeout.sh@128 -- # wait 101085 00:25:15.856 [2024-12-01 15:05:48.905254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.856 [2024-12-01 15:05:48.905345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.856 [2024-12-01 15:05:48.905361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13848c0 with addr=10.0.0.2, port=4420 00:25:15.856 [2024-12-01 15:05:48.905373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13848c0 is same with the state(5) to be set 00:25:15.856 [2024-12-01 15:05:48.905396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13848c0 (9): Bad file descriptor 00:25:15.856 [2024-12-01 15:05:48.905415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.856 [2024-12-01 15:05:48.905424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.856 [2024-12-01 15:05:48.905433] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.856 [2024-12-01 15:05:48.905457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.856 [2024-12-01 15:05:48.905469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.391 [2024-12-01 15:05:50.905657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.391 [2024-12-01 15:05:50.905749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.391 [2024-12-01 15:05:50.905776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13848c0 with addr=10.0.0.2, port=4420 00:25:18.391 [2024-12-01 15:05:50.905790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13848c0 is same with the state(5) to be set 00:25:18.391 [2024-12-01 15:05:50.905816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13848c0 (9): Bad file descriptor 00:25:18.391 [2024-12-01 15:05:50.905835] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.391 [2024-12-01 15:05:50.905844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.391 [2024-12-01 15:05:50.905854] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.391 [2024-12-01 15:05:50.905880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.391 [2024-12-01 15:05:50.905892] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.293 [2024-12-01 15:05:52.905956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
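Editor's note: the retries at 15:05:48 and 15:05:50 show the reconnect behaviour under test — each attempt opens a fresh TCP socket to 10.0.0.2:4420, connect() fails with errno 111 (ECONNREFUSED) because the listener is gone, the reset is marked failed and rescheduled roughly two seconds later, and by 15:05:52 the controller is simply left in the failed state. The refusal itself can be reproduced with any TCP client while the listener is down; the address and port below are the ones from the log, the probe is only an illustration.

    # Illustrative probe only: a raw TCP connect to the same endpoint fails the
    # same way (ECONNREFUSED) while no NVMe/TCP listener is present.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect to 10.0.0.2:4420 refused, matching the errno = 111 lines above"
    fi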
00:25:20.293 [2024-12-01 15:05:52.906004] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.293 [2024-12-01 15:05:52.906013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.293 [2024-12-01 15:05:52.906022] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:20.293 [2024-12-01 15:05:52.906046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.859 00:25:20.859 Latency(us) 00:25:20.859 [2024-12-01T15:05:53.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.860 [2024-12-01T15:05:53.975Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:20.860 NVMe0n1 : 8.21 3114.50 12.17 15.59 0.00 40841.31 3381.06 7046430.72 00:25:20.860 [2024-12-01T15:05:53.975Z] =================================================================================================================== 00:25:20.860 [2024-12-01T15:05:53.975Z] Total : 3114.50 12.17 15.59 0.00 40841.31 3381.06 7046430.72 00:25:20.860 0 00:25:20.860 15:05:53 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:20.860 Attaching 5 probes... 00:25:20.860 1384.238584: reset bdev controller NVMe0 00:25:20.860 1384.308351: reconnect bdev controller NVMe0 00:25:20.860 3389.510412: reconnect delay bdev controller NVMe0 00:25:20.860 3389.528203: reconnect bdev controller NVMe0 00:25:20.860 5389.881372: reconnect delay bdev controller NVMe0 00:25:20.860 5389.898756: reconnect bdev controller NVMe0 00:25:20.860 7390.316211: reconnect delay bdev controller NVMe0 00:25:20.860 7390.335101: reconnect bdev controller NVMe0 00:25:20.860 15:05:53 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:20.860 15:05:53 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:20.860 15:05:53 -- host/timeout.sh@136 -- # kill 101029 00:25:20.860 15:05:53 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:20.860 15:05:53 -- host/timeout.sh@139 -- # killprocess 101000 00:25:20.860 15:05:53 -- common/autotest_common.sh@936 -- # '[' -z 101000 ']' 00:25:20.860 15:05:53 -- common/autotest_common.sh@940 -- # kill -0 101000 00:25:20.860 15:05:53 -- common/autotest_common.sh@941 -- # uname 00:25:20.860 15:05:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:20.860 15:05:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101000 00:25:20.860 killing process with pid 101000 00:25:20.860 Received shutdown signal, test time was about 8.277227 seconds 00:25:20.860 00:25:20.860 Latency(us) 00:25:20.860 [2024-12-01T15:05:53.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.860 [2024-12-01T15:05:53.975Z] =================================================================================================================== 00:25:20.860 [2024-12-01T15:05:53.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.860 15:05:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:20.860 15:05:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:20.860 15:05:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101000' 00:25:20.860 15:05:53 -- common/autotest_common.sh@955 -- # kill 101000 00:25:20.860 15:05:53 -- common/autotest_common.sh@960 -- # wait 101000 00:25:21.119 
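Editor's note: the trace dump and the two arithmetic lines after it are the actual pass/fail gate of this timeout test — the probe output ("Attaching 5 probes...") recorded one reset, four reconnect attempts and three "reconnect delay" events, grep -c therefore returns 3, the guard (( 3 <= 2 )) is false, and the failure branch is skipped before the tracer (pid 101029) and the host app (pid 101000) are stopped. Reconstructed as a standalone check it looks roughly like this; the trace path is verbatim from the run, the surrounding if/exit structure is an assumption about how host/timeout.sh uses the guard.

    # Assumed reconstruction of the reconnect-delay guard shown in the xtrace above.
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        echo "FAIL: expected more than 2 reconnect delays, saw $delays"
        exit 1
    fi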
15:05:54 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:21.378 15:05:54 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:21.378 15:05:54 -- host/timeout.sh@145 -- # nvmftestfini 00:25:21.378 15:05:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:21.378 15:05:54 -- nvmf/common.sh@116 -- # sync 00:25:21.637 15:05:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:21.637 15:05:54 -- nvmf/common.sh@119 -- # set +e 00:25:21.637 15:05:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:21.637 15:05:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:21.637 rmmod nvme_tcp 00:25:21.896 rmmod nvme_fabrics 00:25:21.896 rmmod nvme_keyring 00:25:21.896 15:05:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:21.896 15:05:54 -- nvmf/common.sh@123 -- # set -e 00:25:21.896 15:05:54 -- nvmf/common.sh@124 -- # return 0 00:25:21.896 15:05:54 -- nvmf/common.sh@477 -- # '[' -n 100406 ']' 00:25:21.896 15:05:54 -- nvmf/common.sh@478 -- # killprocess 100406 00:25:21.896 15:05:54 -- common/autotest_common.sh@936 -- # '[' -z 100406 ']' 00:25:21.896 15:05:54 -- common/autotest_common.sh@940 -- # kill -0 100406 00:25:21.896 15:05:54 -- common/autotest_common.sh@941 -- # uname 00:25:21.896 15:05:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:21.896 15:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100406 00:25:21.896 killing process with pid 100406 00:25:21.896 15:05:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:21.896 15:05:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:21.896 15:05:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100406' 00:25:21.896 15:05:54 -- common/autotest_common.sh@955 -- # kill 100406 00:25:21.896 15:05:54 -- common/autotest_common.sh@960 -- # wait 100406 00:25:22.154 15:05:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:22.154 15:05:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:22.154 15:05:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:22.154 15:05:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.154 15:05:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:22.154 15:05:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.154 15:05:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.154 15:05:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.154 15:05:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:22.154 00:25:22.154 real 0m47.287s 00:25:22.154 user 2m18.437s 00:25:22.154 sys 0m5.092s 00:25:22.154 15:05:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:22.154 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:25:22.154 ************************************ 00:25:22.154 END TEST nvmf_timeout 00:25:22.154 ************************************ 00:25:22.154 15:05:55 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:22.154 15:05:55 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:22.154 15:05:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.154 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:25:22.154 15:05:55 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:22.154 00:25:22.154 real 17m28.790s 00:25:22.154 user 55m37.003s 00:25:22.154 sys 3m45.875s 00:25:22.154 15:05:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:22.154 15:05:55 -- 
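Editor's note: the nvmftestfini cleanup is all visible in the xtrace above; condensed, it amounts to the handful of commands below for this virt/tcp configuration. The interface name nvmf_init_if and pid 100406 are the ones from this run; the condensation itself is an approximation, not the full helper.

    # Approximate summary of the nvmftestfini sequence traced above.
    sync
    modprobe -v -r nvme-tcp          # verbose removal also unloads nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 100406 && wait 100406       # stop the nvmf_tgt started for this test
    ip -4 addr flush nvmf_init_if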
common/autotest_common.sh@10 -- # set +x 00:25:22.154 ************************************ 00:25:22.154 END TEST nvmf_tcp 00:25:22.154 ************************************ 00:25:22.413 15:05:55 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:22.413 15:05:55 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:22.413 15:05:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:22.413 15:05:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:22.413 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:25:22.413 ************************************ 00:25:22.413 START TEST spdkcli_nvmf_tcp 00:25:22.413 ************************************ 00:25:22.413 15:05:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:22.413 * Looking for test storage... 00:25:22.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:22.413 15:05:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:22.413 15:05:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:22.413 15:05:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:22.413 15:05:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:22.413 15:05:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:22.413 15:05:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:22.413 15:05:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:22.414 15:05:55 -- scripts/common.sh@335 -- # IFS=.-: 00:25:22.414 15:05:55 -- scripts/common.sh@335 -- # read -ra ver1 00:25:22.414 15:05:55 -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.414 15:05:55 -- scripts/common.sh@336 -- # read -ra ver2 00:25:22.414 15:05:55 -- scripts/common.sh@337 -- # local 'op=<' 00:25:22.414 15:05:55 -- scripts/common.sh@339 -- # ver1_l=2 00:25:22.414 15:05:55 -- scripts/common.sh@340 -- # ver2_l=1 00:25:22.414 15:05:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:22.414 15:05:55 -- scripts/common.sh@343 -- # case "$op" in 00:25:22.414 15:05:55 -- scripts/common.sh@344 -- # : 1 00:25:22.414 15:05:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:22.414 15:05:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.414 15:05:55 -- scripts/common.sh@364 -- # decimal 1 00:25:22.414 15:05:55 -- scripts/common.sh@352 -- # local d=1 00:25:22.414 15:05:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.414 15:05:55 -- scripts/common.sh@354 -- # echo 1 00:25:22.414 15:05:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:22.414 15:05:55 -- scripts/common.sh@365 -- # decimal 2 00:25:22.414 15:05:55 -- scripts/common.sh@352 -- # local d=2 00:25:22.414 15:05:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.414 15:05:55 -- scripts/common.sh@354 -- # echo 2 00:25:22.414 15:05:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:22.414 15:05:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:22.414 15:05:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:22.414 15:05:55 -- scripts/common.sh@367 -- # return 0 00:25:22.414 15:05:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.414 15:05:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:22.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.414 --rc genhtml_branch_coverage=1 00:25:22.414 --rc genhtml_function_coverage=1 00:25:22.414 --rc genhtml_legend=1 00:25:22.414 --rc geninfo_all_blocks=1 00:25:22.414 --rc geninfo_unexecuted_blocks=1 00:25:22.414 00:25:22.414 ' 00:25:22.414 15:05:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:22.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.414 --rc genhtml_branch_coverage=1 00:25:22.414 --rc genhtml_function_coverage=1 00:25:22.414 --rc genhtml_legend=1 00:25:22.414 --rc geninfo_all_blocks=1 00:25:22.414 --rc geninfo_unexecuted_blocks=1 00:25:22.414 00:25:22.414 ' 00:25:22.414 15:05:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:22.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.414 --rc genhtml_branch_coverage=1 00:25:22.414 --rc genhtml_function_coverage=1 00:25:22.414 --rc genhtml_legend=1 00:25:22.414 --rc geninfo_all_blocks=1 00:25:22.414 --rc geninfo_unexecuted_blocks=1 00:25:22.414 00:25:22.414 ' 00:25:22.414 15:05:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:22.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.414 --rc genhtml_branch_coverage=1 00:25:22.414 --rc genhtml_function_coverage=1 00:25:22.414 --rc genhtml_legend=1 00:25:22.414 --rc geninfo_all_blocks=1 00:25:22.414 --rc geninfo_unexecuted_blocks=1 00:25:22.414 00:25:22.414 ' 00:25:22.414 15:05:55 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:22.414 15:05:55 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:22.414 15:05:55 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:22.414 15:05:55 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:22.414 15:05:55 -- nvmf/common.sh@7 -- # uname -s 00:25:22.414 15:05:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.414 15:05:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.414 15:05:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.414 15:05:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.414 15:05:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.414 15:05:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.414 15:05:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
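Editor's note: sourcing test/nvmf/common.sh at this point only exports connectivity defaults; the spdkcli commands further down create their own loopback listeners (127.0.0.1, ports 4260-4262) rather than using these values. Restated as plain shell, with every value taken verbatim from the xtrace above:

    # Defaults exported by nvmf/common.sh in this virt run (values from the log above).
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_TCP_IP_ADDRESS=127.0.0.1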
00:25:22.414 15:05:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.414 15:05:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.414 15:05:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.414 15:05:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:25:22.414 15:05:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:25:22.414 15:05:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.414 15:05:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.414 15:05:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:22.414 15:05:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:22.414 15:05:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.414 15:05:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.414 15:05:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.414 15:05:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.414 15:05:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.414 15:05:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.414 15:05:55 -- paths/export.sh@5 -- # export PATH 00:25:22.414 15:05:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.414 15:05:55 -- nvmf/common.sh@46 -- # : 0 00:25:22.414 15:05:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:22.414 15:05:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:22.414 15:05:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:22.414 15:05:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.414 15:05:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.414 15:05:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:22.414 15:05:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:22.414 15:05:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:22.414 15:05:55 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:22.414 15:05:55 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:22.414 15:05:55 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:22.414 15:05:55 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:22.414 15:05:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.414 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:25:22.414 15:05:55 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:22.414 15:05:55 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101320 00:25:22.414 15:05:55 -- spdkcli/common.sh@34 -- # waitforlisten 101320 00:25:22.414 15:05:55 -- common/autotest_common.sh@829 -- # '[' -z 101320 ']' 00:25:22.414 15:05:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.414 15:05:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.414 15:05:55 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:22.414 15:05:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.415 15:05:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.415 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:25:22.672 [2024-12-01 15:05:55.571342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:22.672 [2024-12-01 15:05:55.571453] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101320 ] 00:25:22.672 [2024-12-01 15:05:55.702968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:22.931 [2024-12-01 15:05:55.790098] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:22.931 [2024-12-01 15:05:55.790386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.931 [2024-12-01 15:05:55.790398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.498 15:05:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.498 15:05:56 -- common/autotest_common.sh@862 -- # return 0 00:25:23.498 15:05:56 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:23.498 15:05:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.499 15:05:56 -- common/autotest_common.sh@10 -- # set +x 00:25:23.499 15:05:56 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:23.499 15:05:56 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:23.499 15:05:56 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:23.499 15:05:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.499 15:05:56 -- common/autotest_common.sh@10 -- # set +x 00:25:23.499 15:05:56 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:23.499 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:23.499 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:23.499 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:23.499 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:23.499 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:23.499 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:23.499 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:23.499 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:23.499 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:23.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:23.499 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:23.499 ' 00:25:24.066 [2024-12-01 15:05:57.021680] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:26.600 [2024-12-01 15:05:59.292135] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.535 [2024-12-01 15:06:00.586034] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:30.067 [2024-12-01 15:06:02.989214] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:31.969 [2024-12-01 15:06:05.051904] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:25:33.866 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:33.866 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:33.866 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:33.866 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:33.866 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:33.866 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:33.866 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:33.866 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:33.866 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:33.866 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:33.866 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:25:33.866 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:33.866 15:06:06 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:33.866 15:06:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.866 15:06:06 -- common/autotest_common.sh@10 -- # set +x 00:25:33.866 15:06:06 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:33.866 15:06:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:33.866 15:06:06 -- common/autotest_common.sh@10 -- # set +x 00:25:33.866 15:06:06 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:33.866 15:06:06 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:34.430 15:06:07 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:34.430 15:06:07 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:34.430 15:06:07 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:34.430 15:06:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:34.430 15:06:07 -- common/autotest_common.sh@10 -- # set +x 00:25:34.430 15:06:07 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:34.430 15:06:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.430 15:06:07 -- common/autotest_common.sh@10 -- # set +x 00:25:34.430 15:06:07 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:34.430 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:34.430 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:34.430 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:34.430 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:34.430 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:34.430 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:34.431 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:34.431 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:34.431 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:34.431 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:34.431 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:34.431 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:34.431 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:34.431 ' 00:25:39.699 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:39.699 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:39.699 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:39.699 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:39.699 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:39.699 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:39.699 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:39.699 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:39.699 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:39.699 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:39.699 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:39.699 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:39.699 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:39.699 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:39.958 15:06:12 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:39.958 15:06:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.958 15:06:12 -- common/autotest_common.sh@10 -- # set +x 00:25:39.958 15:06:12 -- spdkcli/nvmf.sh@90 -- # killprocess 101320 00:25:39.958 15:06:12 -- common/autotest_common.sh@936 -- # '[' -z 101320 ']' 00:25:39.958 15:06:12 -- common/autotest_common.sh@940 -- # kill -0 101320 00:25:39.958 15:06:12 -- common/autotest_common.sh@941 -- # uname 00:25:39.958 15:06:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:39.958 15:06:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101320 00:25:39.958 15:06:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:39.958 15:06:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:39.958 killing process with pid 101320 00:25:39.958 15:06:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101320' 00:25:39.958 15:06:12 -- common/autotest_common.sh@955 -- # kill 101320 00:25:39.958 [2024-12-01 15:06:12.948281] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:39.958 15:06:12 -- common/autotest_common.sh@960 -- # wait 101320 00:25:40.217 15:06:13 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:40.217 15:06:13 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:40.217 15:06:13 -- spdkcli/common.sh@13 -- # '[' -n 101320 ']' 00:25:40.217 15:06:13 -- spdkcli/common.sh@14 -- # killprocess 101320 00:25:40.217 15:06:13 -- common/autotest_common.sh@936 -- # '[' -z 101320 ']' 00:25:40.217 15:06:13 -- common/autotest_common.sh@940 -- # kill -0 101320 00:25:40.217 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101320) - No such process 00:25:40.217 Process with pid 101320 is not found 00:25:40.217 15:06:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101320 is not found' 00:25:40.217 15:06:13 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:40.217 15:06:13 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:40.217 15:06:13 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:40.217 00:25:40.217 real 0m17.842s 00:25:40.217 user 0m38.675s 00:25:40.217 sys 0m0.970s 00:25:40.217 15:06:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:40.217 ************************************ 00:25:40.217 15:06:13 -- 
common/autotest_common.sh@10 -- # set +x 00:25:40.217 END TEST spdkcli_nvmf_tcp 00:25:40.217 ************************************ 00:25:40.217 15:06:13 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:40.217 15:06:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:40.217 15:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:40.217 15:06:13 -- common/autotest_common.sh@10 -- # set +x 00:25:40.217 ************************************ 00:25:40.217 START TEST nvmf_identify_passthru 00:25:40.217 ************************************ 00:25:40.217 15:06:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:40.217 * Looking for test storage... 00:25:40.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:40.217 15:06:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:40.217 15:06:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:40.217 15:06:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:40.476 15:06:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:40.476 15:06:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:40.476 15:06:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:40.476 15:06:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:40.476 15:06:13 -- scripts/common.sh@335 -- # IFS=.-: 00:25:40.476 15:06:13 -- scripts/common.sh@335 -- # read -ra ver1 00:25:40.476 15:06:13 -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.476 15:06:13 -- scripts/common.sh@336 -- # read -ra ver2 00:25:40.476 15:06:13 -- scripts/common.sh@337 -- # local 'op=<' 00:25:40.476 15:06:13 -- scripts/common.sh@339 -- # ver1_l=2 00:25:40.476 15:06:13 -- scripts/common.sh@340 -- # ver2_l=1 00:25:40.476 15:06:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:40.476 15:06:13 -- scripts/common.sh@343 -- # case "$op" in 00:25:40.476 15:06:13 -- scripts/common.sh@344 -- # : 1 00:25:40.476 15:06:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:40.476 15:06:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.476 15:06:13 -- scripts/common.sh@364 -- # decimal 1 00:25:40.476 15:06:13 -- scripts/common.sh@352 -- # local d=1 00:25:40.476 15:06:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.476 15:06:13 -- scripts/common.sh@354 -- # echo 1 00:25:40.476 15:06:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:40.476 15:06:13 -- scripts/common.sh@365 -- # decimal 2 00:25:40.476 15:06:13 -- scripts/common.sh@352 -- # local d=2 00:25:40.476 15:06:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.476 15:06:13 -- scripts/common.sh@354 -- # echo 2 00:25:40.476 15:06:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:40.476 15:06:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:40.476 15:06:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:40.476 15:06:13 -- scripts/common.sh@367 -- # return 0 00:25:40.476 15:06:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.476 15:06:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.476 --rc genhtml_branch_coverage=1 00:25:40.476 --rc genhtml_function_coverage=1 00:25:40.476 --rc genhtml_legend=1 00:25:40.476 --rc geninfo_all_blocks=1 00:25:40.476 --rc geninfo_unexecuted_blocks=1 00:25:40.476 00:25:40.476 ' 00:25:40.476 15:06:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.476 --rc genhtml_branch_coverage=1 00:25:40.476 --rc genhtml_function_coverage=1 00:25:40.476 --rc genhtml_legend=1 00:25:40.476 --rc geninfo_all_blocks=1 00:25:40.476 --rc geninfo_unexecuted_blocks=1 00:25:40.476 00:25:40.476 ' 00:25:40.476 15:06:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.476 --rc genhtml_branch_coverage=1 00:25:40.476 --rc genhtml_function_coverage=1 00:25:40.476 --rc genhtml_legend=1 00:25:40.476 --rc geninfo_all_blocks=1 00:25:40.476 --rc geninfo_unexecuted_blocks=1 00:25:40.476 00:25:40.476 ' 00:25:40.476 15:06:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.476 --rc genhtml_branch_coverage=1 00:25:40.476 --rc genhtml_function_coverage=1 00:25:40.476 --rc genhtml_legend=1 00:25:40.476 --rc geninfo_all_blocks=1 00:25:40.476 --rc geninfo_unexecuted_blocks=1 00:25:40.476 00:25:40.476 ' 00:25:40.477 15:06:13 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:40.477 15:06:13 -- nvmf/common.sh@7 -- # uname -s 00:25:40.477 15:06:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.477 15:06:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.477 15:06:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.477 15:06:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.477 15:06:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.477 15:06:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.477 15:06:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.477 15:06:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.477 15:06:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.477 15:06:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.477 15:06:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 
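A minimal standalone sketch of the per-run host identity set up in the trace above (assumes nvme-cli is installed; the HOSTID derivation via parameter expansion is an assumption, since the trace only records the resulting value):
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: keep only the trailing UUID for --hostid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# Initiator-side commands reuse the same identity later, e.g.:
#   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n <subsystem NQN> "${NVME_HOST[@]}"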
00:25:40.477 15:06:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:25:40.477 15:06:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.477 15:06:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.477 15:06:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:40.477 15:06:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.477 15:06:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.477 15:06:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.477 15:06:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.477 15:06:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.477 15:06:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.477 15:06:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.477 15:06:13 -- paths/export.sh@5 -- # export PATH 00:25:40.477 15:06:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.477 15:06:13 -- nvmf/common.sh@46 -- # : 0 00:25:40.477 15:06:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:40.477 15:06:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:40.477 15:06:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:40.477 15:06:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.477 15:06:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.477 15:06:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:40.477 15:06:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:40.477 15:06:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:40.477 15:06:13 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.477 15:06:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.477 15:06:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.477 15:06:13 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.477 15:06:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.477 15:06:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.477 15:06:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.477 15:06:13 -- paths/export.sh@5 -- # export PATH 00:25:40.477 15:06:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.477 15:06:13 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:40.477 15:06:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:40.477 15:06:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.477 15:06:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:40.477 15:06:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:40.477 15:06:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:40.477 15:06:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.477 15:06:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:40.477 15:06:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.477 15:06:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:40.477 15:06:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:40.477 15:06:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:40.477 15:06:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:40.477 15:06:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:40.477 15:06:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:40.477 15:06:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.477 15:06:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.477 15:06:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:40.477 15:06:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:40.477 15:06:13 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:40.477 15:06:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:40.477 15:06:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:40.477 15:06:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.477 15:06:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:40.477 15:06:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:40.477 15:06:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:40.477 15:06:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:40.477 15:06:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:40.477 15:06:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:40.477 Cannot find device "nvmf_tgt_br" 00:25:40.477 15:06:13 -- nvmf/common.sh@154 -- # true 00:25:40.477 15:06:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:40.477 Cannot find device "nvmf_tgt_br2" 00:25:40.477 15:06:13 -- nvmf/common.sh@155 -- # true 00:25:40.477 15:06:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:40.477 15:06:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:40.477 Cannot find device "nvmf_tgt_br" 00:25:40.477 15:06:13 -- nvmf/common.sh@157 -- # true 00:25:40.477 15:06:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:40.477 Cannot find device "nvmf_tgt_br2" 00:25:40.477 15:06:13 -- nvmf/common.sh@158 -- # true 00:25:40.477 15:06:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:40.477 15:06:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:40.477 15:06:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:40.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.477 15:06:13 -- nvmf/common.sh@161 -- # true 00:25:40.477 15:06:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:40.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.477 15:06:13 -- nvmf/common.sh@162 -- # true 00:25:40.477 15:06:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:40.477 15:06:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:40.477 15:06:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:40.477 15:06:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:40.477 15:06:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:40.477 15:06:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:40.736 15:06:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:40.736 15:06:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:40.736 15:06:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:40.736 15:06:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:40.736 15:06:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:40.736 15:06:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:40.736 15:06:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:40.736 15:06:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:40.736 15:06:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:40.736 15:06:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:40.736 15:06:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:40.736 15:06:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:40.736 15:06:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:40.736 15:06:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:40.736 15:06:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:40.736 15:06:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:40.736 15:06:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:40.736 15:06:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:40.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:25:40.736 00:25:40.736 --- 10.0.0.2 ping statistics --- 00:25:40.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.736 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:25:40.736 15:06:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:40.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:40.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:25:40.736 00:25:40.736 --- 10.0.0.3 ping statistics --- 00:25:40.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.736 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:40.736 15:06:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:40.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:25:40.736 00:25:40.736 --- 10.0.0.1 ping statistics --- 00:25:40.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.737 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:40.737 15:06:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.737 15:06:13 -- nvmf/common.sh@421 -- # return 0 00:25:40.737 15:06:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:40.737 15:06:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.737 15:06:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:40.737 15:06:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:40.737 15:06:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.737 15:06:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:40.737 15:06:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:40.737 15:06:13 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:40.737 15:06:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.737 15:06:13 -- common/autotest_common.sh@10 -- # set +x 00:25:40.737 15:06:13 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:40.737 15:06:13 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:40.737 15:06:13 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:40.737 15:06:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:40.737 15:06:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:40.737 15:06:13 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:40.737 15:06:13 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:40.737 15:06:13 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:40.737 15:06:13 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:40.737 15:06:13 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:40.737 15:06:13 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:40.737 15:06:13 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:40.737 15:06:13 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:40.737 15:06:13 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:40.737 15:06:13 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:40.737 15:06:13 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:40.737 15:06:13 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:40.737 15:06:13 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:41.005 15:06:13 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:41.005 15:06:13 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:41.005 15:06:13 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:41.005 15:06:13 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:41.286 15:06:14 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:41.286 15:06:14 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:41.286 15:06:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.286 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:25:41.286 15:06:14 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:41.286 15:06:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.286 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:25:41.286 15:06:14 -- target/identify_passthru.sh@31 -- # nvmfpid=101834 00:25:41.286 15:06:14 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:41.286 15:06:14 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.286 15:06:14 -- target/identify_passthru.sh@35 -- # waitforlisten 101834 00:25:41.286 15:06:14 -- common/autotest_common.sh@829 -- # '[' -z 101834 ']' 00:25:41.286 15:06:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.286 15:06:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.286 15:06:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.286 15:06:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.286 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:25:41.286 [2024-12-01 15:06:14.286221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:41.286 [2024-12-01 15:06:14.286315] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.557 [2024-12-01 15:06:14.430007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.557 [2024-12-01 15:06:14.533182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:41.557 [2024-12-01 15:06:14.533381] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.557 [2024-12-01 15:06:14.533398] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.557 [2024-12-01 15:06:14.533410] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
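A condensed sketch of the target bring-up RPC sequence traced below; rpc_cmd in the trace is the harness's RPC client, shown here as the equivalent scripts/rpc.py calls (the rpc.py spelling is an assumption; the RPC names and arguments are taken from the trace):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_set_config --passthru-identify-ctrlr       # must land before framework init; enables the custom identify handler
$rpc framework_start_init                            # nvmf_tgt was launched with --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0    # exposes namespace bdev Nvme0n1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420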
00:25:41.557 [2024-12-01 15:06:14.533574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.557 [2024-12-01 15:06:14.534307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.557 [2024-12-01 15:06:14.534356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.557 [2024-12-01 15:06:14.534366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.125 15:06:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.125 15:06:15 -- common/autotest_common.sh@862 -- # return 0 00:25:42.125 15:06:15 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:42.125 15:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.125 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.125 15:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.125 15:06:15 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:42.125 15:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.125 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.383 [2024-12-01 15:06:15.344292] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:42.383 15:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.383 15:06:15 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.383 15:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.383 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.383 [2024-12-01 15:06:15.359238] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.383 15:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.383 15:06:15 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:42.383 15:06:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.383 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.383 15:06:15 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:42.383 15:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.383 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.383 Nvme0n1 00:25:42.383 15:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.383 15:06:15 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:42.383 15:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.383 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.383 15:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.383 15:06:15 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:42.383 15:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.383 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.641 15:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.641 15:06:15 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.641 15:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.641 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.641 [2024-12-01 15:06:15.510443] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.641 15:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:42.641 15:06:15 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:42.641 15:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.641 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.641 [2024-12-01 15:06:15.518161] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:42.641 [ 00:25:42.641 { 00:25:42.641 "allow_any_host": true, 00:25:42.641 "hosts": [], 00:25:42.641 "listen_addresses": [], 00:25:42.641 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:42.641 "subtype": "Discovery" 00:25:42.641 }, 00:25:42.641 { 00:25:42.641 "allow_any_host": true, 00:25:42.641 "hosts": [], 00:25:42.641 "listen_addresses": [ 00:25:42.641 { 00:25:42.641 "adrfam": "IPv4", 00:25:42.641 "traddr": "10.0.0.2", 00:25:42.641 "transport": "TCP", 00:25:42.641 "trsvcid": "4420", 00:25:42.641 "trtype": "TCP" 00:25:42.641 } 00:25:42.641 ], 00:25:42.641 "max_cntlid": 65519, 00:25:42.641 "max_namespaces": 1, 00:25:42.641 "min_cntlid": 1, 00:25:42.641 "model_number": "SPDK bdev Controller", 00:25:42.641 "namespaces": [ 00:25:42.641 { 00:25:42.641 "bdev_name": "Nvme0n1", 00:25:42.641 "name": "Nvme0n1", 00:25:42.641 "nguid": "54C8AB67A3E940B08E35829F91FFA6F4", 00:25:42.641 "nsid": 1, 00:25:42.641 "uuid": "54c8ab67-a3e9-40b0-8e35-829f91ffa6f4" 00:25:42.641 } 00:25:42.641 ], 00:25:42.641 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:42.641 "serial_number": "SPDK00000000000001", 00:25:42.641 "subtype": "NVMe" 00:25:42.641 } 00:25:42.641 ] 00:25:42.641 15:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.641 15:06:15 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:42.641 15:06:15 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:42.641 15:06:15 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:42.641 15:06:15 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:42.641 15:06:15 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:42.641 15:06:15 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:42.641 15:06:15 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:42.898 15:06:15 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:42.898 15:06:15 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:42.898 15:06:15 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:42.898 15:06:15 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.898 15:06:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.898 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.898 15:06:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.898 15:06:15 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:42.898 15:06:15 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:42.898 15:06:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:42.898 15:06:15 -- nvmf/common.sh@116 -- # sync 00:25:43.156 15:06:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:43.156 15:06:16 -- nvmf/common.sh@119 -- # set +e 00:25:43.156 15:06:16 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:43.156 15:06:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:43.156 rmmod nvme_tcp 00:25:43.156 rmmod nvme_fabrics 00:25:43.156 rmmod nvme_keyring 00:25:43.156 15:06:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:43.156 15:06:16 -- nvmf/common.sh@123 -- # set -e 00:25:43.156 15:06:16 -- nvmf/common.sh@124 -- # return 0 00:25:43.156 15:06:16 -- nvmf/common.sh@477 -- # '[' -n 101834 ']' 00:25:43.156 15:06:16 -- nvmf/common.sh@478 -- # killprocess 101834 00:25:43.156 15:06:16 -- common/autotest_common.sh@936 -- # '[' -z 101834 ']' 00:25:43.156 15:06:16 -- common/autotest_common.sh@940 -- # kill -0 101834 00:25:43.156 15:06:16 -- common/autotest_common.sh@941 -- # uname 00:25:43.156 15:06:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.156 15:06:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101834 00:25:43.156 15:06:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:43.156 15:06:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:43.156 killing process with pid 101834 00:25:43.156 15:06:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101834' 00:25:43.156 15:06:16 -- common/autotest_common.sh@955 -- # kill 101834 00:25:43.156 [2024-12-01 15:06:16.147393] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:43.156 15:06:16 -- common/autotest_common.sh@960 -- # wait 101834 00:25:43.415 15:06:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:43.415 15:06:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:43.415 15:06:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:43.415 15:06:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.415 15:06:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:43.415 15:06:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.415 15:06:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:43.415 15:06:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.415 15:06:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:43.415 00:25:43.415 real 0m3.241s 00:25:43.415 user 0m7.866s 00:25:43.415 sys 0m0.898s 00:25:43.415 15:06:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:43.415 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:25:43.415 ************************************ 00:25:43.415 END TEST nvmf_identify_passthru 00:25:43.415 ************************************ 00:25:43.415 15:06:16 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:43.415 15:06:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:43.415 15:06:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:43.415 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:25:43.415 ************************************ 00:25:43.415 START TEST nvmf_dif 00:25:43.415 ************************************ 00:25:43.415 15:06:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:43.674 * Looking for test storage... 
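The pass/fail core of the nvmf_identify_passthru run that ends above, reduced to a sketch: identify the same device once over PCIe and once through the exported TCP subsystem, then require identical serial and model strings (both identify invocations are taken from the trace; the comparison is written as a plain if for readability):
id=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
local_serial=$($id -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
remote_serial=$($id -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
if [ "$local_serial" != "$remote_serial" ]; then       # 12340 vs 12340 in this run
    echo "passthru identify mismatch: $local_serial vs $remote_serial"; exit 1
fi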
00:25:43.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:43.674 15:06:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:43.674 15:06:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:43.674 15:06:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:43.674 15:06:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:43.674 15:06:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:43.674 15:06:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:43.674 15:06:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:43.674 15:06:16 -- scripts/common.sh@335 -- # IFS=.-: 00:25:43.674 15:06:16 -- scripts/common.sh@335 -- # read -ra ver1 00:25:43.674 15:06:16 -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.674 15:06:16 -- scripts/common.sh@336 -- # read -ra ver2 00:25:43.674 15:06:16 -- scripts/common.sh@337 -- # local 'op=<' 00:25:43.674 15:06:16 -- scripts/common.sh@339 -- # ver1_l=2 00:25:43.674 15:06:16 -- scripts/common.sh@340 -- # ver2_l=1 00:25:43.674 15:06:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:43.674 15:06:16 -- scripts/common.sh@343 -- # case "$op" in 00:25:43.674 15:06:16 -- scripts/common.sh@344 -- # : 1 00:25:43.674 15:06:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:43.674 15:06:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:43.674 15:06:16 -- scripts/common.sh@364 -- # decimal 1 00:25:43.674 15:06:16 -- scripts/common.sh@352 -- # local d=1 00:25:43.674 15:06:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.674 15:06:16 -- scripts/common.sh@354 -- # echo 1 00:25:43.674 15:06:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:43.674 15:06:16 -- scripts/common.sh@365 -- # decimal 2 00:25:43.674 15:06:16 -- scripts/common.sh@352 -- # local d=2 00:25:43.674 15:06:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.674 15:06:16 -- scripts/common.sh@354 -- # echo 2 00:25:43.674 15:06:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:43.674 15:06:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:43.674 15:06:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:43.674 15:06:16 -- scripts/common.sh@367 -- # return 0 00:25:43.674 15:06:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.674 15:06:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:43.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.674 --rc genhtml_branch_coverage=1 00:25:43.674 --rc genhtml_function_coverage=1 00:25:43.674 --rc genhtml_legend=1 00:25:43.674 --rc geninfo_all_blocks=1 00:25:43.674 --rc geninfo_unexecuted_blocks=1 00:25:43.674 00:25:43.674 ' 00:25:43.674 15:06:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:43.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.674 --rc genhtml_branch_coverage=1 00:25:43.674 --rc genhtml_function_coverage=1 00:25:43.674 --rc genhtml_legend=1 00:25:43.674 --rc geninfo_all_blocks=1 00:25:43.674 --rc geninfo_unexecuted_blocks=1 00:25:43.674 00:25:43.674 ' 00:25:43.674 15:06:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:43.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.674 --rc genhtml_branch_coverage=1 00:25:43.674 --rc genhtml_function_coverage=1 00:25:43.674 --rc genhtml_legend=1 00:25:43.674 --rc geninfo_all_blocks=1 00:25:43.674 --rc geninfo_unexecuted_blocks=1 00:25:43.674 00:25:43.674 ' 00:25:43.674 
15:06:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:43.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.674 --rc genhtml_branch_coverage=1 00:25:43.674 --rc genhtml_function_coverage=1 00:25:43.674 --rc genhtml_legend=1 00:25:43.674 --rc geninfo_all_blocks=1 00:25:43.674 --rc geninfo_unexecuted_blocks=1 00:25:43.674 00:25:43.674 ' 00:25:43.674 15:06:16 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:43.674 15:06:16 -- nvmf/common.sh@7 -- # uname -s 00:25:43.674 15:06:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.674 15:06:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.674 15:06:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.674 15:06:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.674 15:06:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.674 15:06:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.674 15:06:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.674 15:06:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.674 15:06:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.674 15:06:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.674 15:06:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:25:43.674 15:06:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:25:43.674 15:06:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.674 15:06:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.674 15:06:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:43.674 15:06:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:43.674 15:06:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.675 15:06:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.675 15:06:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.675 15:06:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.675 15:06:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.675 15:06:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.675 15:06:16 -- paths/export.sh@5 -- # export PATH 00:25:43.675 15:06:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.675 15:06:16 -- nvmf/common.sh@46 -- # : 0 00:25:43.675 15:06:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:43.675 15:06:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:43.675 15:06:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:43.675 15:06:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.675 15:06:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.675 15:06:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:43.675 15:06:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:43.675 15:06:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:43.675 15:06:16 -- target/dif.sh@15 -- # NULL_META=16 00:25:43.675 15:06:16 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:43.675 15:06:16 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:43.675 15:06:16 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:43.675 15:06:16 -- target/dif.sh@135 -- # nvmftestinit 00:25:43.675 15:06:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:43.675 15:06:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.675 15:06:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:43.675 15:06:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:43.675 15:06:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:43.675 15:06:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.675 15:06:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:43.675 15:06:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.675 15:06:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:43.675 15:06:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:43.675 15:06:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:43.675 15:06:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:43.675 15:06:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:43.675 15:06:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:43.675 15:06:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.675 15:06:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.675 15:06:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:43.675 15:06:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:43.675 15:06:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:43.675 15:06:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:43.675 15:06:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:43.675 15:06:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.675 15:06:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:43.675 15:06:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:43.675 15:06:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:43.675 15:06:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:43.675 15:06:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:43.675 15:06:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:43.675 Cannot find device "nvmf_tgt_br" 
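The nvmf_veth_init sequence traced below rebuilds the test network from scratch; condensed into its plain ip/iptables commands (run as root; the second target interface, nvmf_tgt_if2/10.0.0.3, is set up the same way and omitted here for brevity):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end + its bridge port
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target end + its bridge port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge joins the two host-side veth ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                               # reachability check before the target starts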
00:25:43.675 15:06:16 -- nvmf/common.sh@154 -- # true 00:25:43.675 15:06:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:43.675 Cannot find device "nvmf_tgt_br2" 00:25:43.675 15:06:16 -- nvmf/common.sh@155 -- # true 00:25:43.675 15:06:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:43.675 15:06:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:43.675 Cannot find device "nvmf_tgt_br" 00:25:43.675 15:06:16 -- nvmf/common.sh@157 -- # true 00:25:43.675 15:06:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:43.675 Cannot find device "nvmf_tgt_br2" 00:25:43.675 15:06:16 -- nvmf/common.sh@158 -- # true 00:25:43.675 15:06:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:43.933 15:06:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:43.933 15:06:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:43.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.933 15:06:16 -- nvmf/common.sh@161 -- # true 00:25:43.933 15:06:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:43.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.933 15:06:16 -- nvmf/common.sh@162 -- # true 00:25:43.933 15:06:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:43.933 15:06:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:43.933 15:06:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:43.933 15:06:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:43.933 15:06:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:43.933 15:06:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:43.933 15:06:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:43.933 15:06:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:43.933 15:06:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:43.933 15:06:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:43.933 15:06:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:43.933 15:06:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:43.933 15:06:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:43.933 15:06:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:43.933 15:06:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:43.933 15:06:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:43.933 15:06:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:43.933 15:06:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:43.933 15:06:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:43.933 15:06:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:43.934 15:06:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:43.934 15:06:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:43.934 15:06:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:43.934 15:06:16 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:43.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:25:43.934 00:25:43.934 --- 10.0.0.2 ping statistics --- 00:25:43.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.934 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:25:43.934 15:06:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:43.934 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:43.934 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:25:43.934 00:25:43.934 --- 10.0.0.3 ping statistics --- 00:25:43.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.934 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:25:43.934 15:06:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:43.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:43.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:25:43.934 00:25:43.934 --- 10.0.0.1 ping statistics --- 00:25:43.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.934 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:25:43.934 15:06:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.934 15:06:16 -- nvmf/common.sh@421 -- # return 0 00:25:43.934 15:06:16 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:43.934 15:06:16 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:44.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:44.451 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:44.451 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:44.451 15:06:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:44.451 15:06:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:44.451 15:06:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:44.451 15:06:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:44.451 15:06:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:44.451 15:06:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:44.451 15:06:17 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:44.451 15:06:17 -- target/dif.sh@137 -- # nvmfappstart 00:25:44.451 15:06:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:44.451 15:06:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:44.451 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:25:44.451 15:06:17 -- nvmf/common.sh@469 -- # nvmfpid=102191 00:25:44.451 15:06:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:44.451 15:06:17 -- nvmf/common.sh@470 -- # waitforlisten 102191 00:25:44.451 15:06:17 -- common/autotest_common.sh@829 -- # '[' -z 102191 ']' 00:25:44.451 15:06:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.451 15:06:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:44.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.451 15:06:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
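A condensed sketch of the DIF-enabled export that fio_dif_1_default builds below (RPC names and arguments are from the trace; the scripts/rpc.py spelling stands in for the harness's rpc_cmd wrapper):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip            # transport inserts/strips DIF on the wire
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1     # 64 MB null bdev, 512B blocks + 16B metadata, DIF type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420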
00:25:44.451 15:06:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:44.451 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:25:44.451 [2024-12-01 15:06:17.483154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:44.452 [2024-12-01 15:06:17.483262] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.710 [2024-12-01 15:06:17.625703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.710 [2024-12-01 15:06:17.713600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:44.710 [2024-12-01 15:06:17.713811] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.710 [2024-12-01 15:06:17.713826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.710 [2024-12-01 15:06:17.713836] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:44.710 [2024-12-01 15:06:17.713871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.646 15:06:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:45.646 15:06:18 -- common/autotest_common.sh@862 -- # return 0 00:25:45.646 15:06:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:45.646 15:06:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:45.646 15:06:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.646 15:06:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.646 15:06:18 -- target/dif.sh@139 -- # create_transport 00:25:45.646 15:06:18 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:45.646 15:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.646 15:06:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.646 [2024-12-01 15:06:18.481345] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.646 15:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.646 15:06:18 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:45.646 15:06:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:45.646 15:06:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:45.646 15:06:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.646 ************************************ 00:25:45.646 START TEST fio_dif_1_default 00:25:45.646 ************************************ 00:25:45.646 15:06:18 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:45.646 15:06:18 -- target/dif.sh@86 -- # create_subsystems 0 00:25:45.646 15:06:18 -- target/dif.sh@28 -- # local sub 00:25:45.646 15:06:18 -- target/dif.sh@30 -- # for sub in "$@" 00:25:45.646 15:06:18 -- target/dif.sh@31 -- # create_subsystem 0 00:25:45.646 15:06:18 -- target/dif.sh@18 -- # local sub_id=0 00:25:45.646 15:06:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:45.646 15:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.646 15:06:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.646 bdev_null0 00:25:45.646 15:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.646 15:06:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:45.646 15:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.646 15:06:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.646 15:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.646 15:06:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:45.646 15:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.646 15:06:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.646 15:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.646 15:06:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:45.646 15:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.646 15:06:18 -- common/autotest_common.sh@10 -- # set +x 00:25:45.646 [2024-12-01 15:06:18.525475] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.646 15:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.646 15:06:18 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:45.646 15:06:18 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:45.646 15:06:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:45.646 15:06:18 -- nvmf/common.sh@520 -- # config=() 00:25:45.646 15:06:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:45.646 15:06:18 -- nvmf/common.sh@520 -- # local subsystem config 00:25:45.646 15:06:18 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:45.646 15:06:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:45.646 15:06:18 -- target/dif.sh@82 -- # gen_fio_conf 00:25:45.646 15:06:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:45.646 { 00:25:45.646 "params": { 00:25:45.646 "name": "Nvme$subsystem", 00:25:45.646 "trtype": "$TEST_TRANSPORT", 00:25:45.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.646 "adrfam": "ipv4", 00:25:45.646 "trsvcid": "$NVMF_PORT", 00:25:45.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.646 "hdgst": ${hdgst:-false}, 00:25:45.646 "ddgst": ${ddgst:-false} 00:25:45.646 }, 00:25:45.646 "method": "bdev_nvme_attach_controller" 00:25:45.646 } 00:25:45.646 EOF 00:25:45.646 )") 00:25:45.646 15:06:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:45.646 15:06:18 -- target/dif.sh@54 -- # local file 00:25:45.646 15:06:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:45.646 15:06:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:45.646 15:06:18 -- target/dif.sh@56 -- # cat 00:25:45.647 15:06:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:45.647 15:06:18 -- common/autotest_common.sh@1330 -- # shift 00:25:45.647 15:06:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:45.647 15:06:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:45.647 15:06:18 -- nvmf/common.sh@542 -- # cat 00:25:45.647 15:06:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:45.647 15:06:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:45.647 15:06:18 -- common/autotest_common.sh@1334 -- # awk '{print 
$3}' 00:25:45.647 15:06:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:45.647 15:06:18 -- target/dif.sh@72 -- # (( file <= files )) 00:25:45.647 15:06:18 -- nvmf/common.sh@544 -- # jq . 00:25:45.647 15:06:18 -- nvmf/common.sh@545 -- # IFS=, 00:25:45.647 15:06:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:45.647 "params": { 00:25:45.647 "name": "Nvme0", 00:25:45.647 "trtype": "tcp", 00:25:45.647 "traddr": "10.0.0.2", 00:25:45.647 "adrfam": "ipv4", 00:25:45.647 "trsvcid": "4420", 00:25:45.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:45.647 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:45.647 "hdgst": false, 00:25:45.647 "ddgst": false 00:25:45.647 }, 00:25:45.647 "method": "bdev_nvme_attach_controller" 00:25:45.647 }' 00:25:45.647 15:06:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:45.647 15:06:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:45.647 15:06:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:45.647 15:06:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:45.647 15:06:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:45.647 15:06:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:45.647 15:06:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:45.647 15:06:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:45.647 15:06:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:45.647 15:06:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:45.647 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:45.647 fio-3.35 00:25:45.647 Starting 1 thread 00:25:46.215 [2024-12-01 15:06:19.171083] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
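The JSON printed just above is the per-controller fragment that gen_nvmf_target_json emits; the fio wrapper then hands the assembled config and the generated job file to fio as /dev/fd/62 and /dev/fd/61. A rough standalone equivalent using ordinary files (the surrounding "subsystems"/"bdev" wrapper follows SPDK's usual JSON config layout and, like the /tmp paths and job file, is an assumption here; only the params block is copied from the log):

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# run fio through the SPDK bdev ioengine plugin, as fio_bdev does above
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/job.fio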
00:25:46.215 [2024-12-01 15:06:19.171166] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:58.421 00:25:58.421 filename0: (groupid=0, jobs=1): err= 0: pid=102270: Sun Dec 1 15:06:29 2024 00:25:58.421 read: IOPS=6594, BW=25.8MiB/s (27.0MB/s)(258MiB/10023msec) 00:25:58.421 slat (nsec): min=5753, max=53997, avg=6722.87, stdev=1892.36 00:25:58.421 clat (usec): min=345, max=41475, avg=586.66, stdev=2892.38 00:25:58.421 lat (usec): min=351, max=41483, avg=593.38, stdev=2892.43 00:25:58.421 clat percentiles (usec): 00:25:58.421 | 1.00th=[ 351], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 363], 00:25:58.421 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:25:58.421 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 433], 00:25:58.421 | 99.00th=[ 494], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:25:58.421 | 99.99th=[41681] 00:25:58.421 bw ( KiB/s): min=16447, max=30944, per=100.00%, avg=26433.85, stdev=3784.50, samples=20 00:25:58.421 iops : min= 4111, max= 7736, avg=6608.40, stdev=946.21, samples=20 00:25:58.421 lat (usec) : 500=99.07%, 750=0.38%, 1000=0.01% 00:25:58.421 lat (msec) : 2=0.02%, 10=0.01%, 50=0.51% 00:25:58.421 cpu : usr=87.60%, sys=9.94%, ctx=34, majf=0, minf=0 00:25:58.421 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:58.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.421 issued rwts: total=66100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.421 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:58.421 00:25:58.421 Run status group 0 (all jobs): 00:25:58.421 READ: bw=25.8MiB/s (27.0MB/s), 25.8MiB/s-25.8MiB/s (27.0MB/s-27.0MB/s), io=258MiB (271MB), run=10023-10023msec 00:25:58.422 15:06:29 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:58.422 15:06:29 -- target/dif.sh@43 -- # local sub 00:25:58.422 15:06:29 -- target/dif.sh@45 -- # for sub in "$@" 00:25:58.422 15:06:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:58.422 15:06:29 -- target/dif.sh@36 -- # local sub_id=0 00:25:58.422 15:06:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:58.422 15:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 15:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.422 15:06:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:58.422 15:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 15:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.422 00:25:58.422 real 0m11.049s 00:25:58.422 user 0m9.407s 00:25:58.422 sys 0m1.298s 00:25:58.422 15:06:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 ************************************ 00:25:58.422 END TEST fio_dif_1_default 00:25:58.422 ************************************ 00:25:58.422 15:06:29 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:58.422 15:06:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:58.422 15:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 ************************************ 
00:25:58.422 START TEST fio_dif_1_multi_subsystems 00:25:58.422 ************************************ 00:25:58.422 15:06:29 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:58.422 15:06:29 -- target/dif.sh@92 -- # local files=1 00:25:58.422 15:06:29 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:58.422 15:06:29 -- target/dif.sh@28 -- # local sub 00:25:58.422 15:06:29 -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.422 15:06:29 -- target/dif.sh@31 -- # create_subsystem 0 00:25:58.422 15:06:29 -- target/dif.sh@18 -- # local sub_id=0 00:25:58.422 15:06:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:58.422 15:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 bdev_null0 00:25:58.422 15:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.422 15:06:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:58.422 15:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 15:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.422 15:06:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:58.422 15:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 15:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.422 15:06:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.422 15:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 [2024-12-01 15:06:29.626092] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.422 15:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.422 15:06:29 -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.422 15:06:29 -- target/dif.sh@31 -- # create_subsystem 1 00:25:58.422 15:06:29 -- target/dif.sh@18 -- # local sub_id=1 00:25:58.422 15:06:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:58.422 15:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 bdev_null1 00:25:58.422 15:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.422 15:06:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:58.422 15:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 15:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.422 15:06:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:58.422 15:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 15:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.422 15:06:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.422 15:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:58.422 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.422 15:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.422 15:06:29 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:58.422 15:06:29 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:58.422 15:06:29 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:58.422 15:06:29 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.422 15:06:29 -- nvmf/common.sh@520 -- # config=() 00:25:58.422 15:06:29 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.422 15:06:29 -- nvmf/common.sh@520 -- # local subsystem config 00:25:58.422 15:06:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:58.422 15:06:29 -- target/dif.sh@82 -- # gen_fio_conf 00:25:58.422 15:06:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:58.422 15:06:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:58.422 15:06:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:58.422 { 00:25:58.422 "params": { 00:25:58.422 "name": "Nvme$subsystem", 00:25:58.422 "trtype": "$TEST_TRANSPORT", 00:25:58.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.422 "adrfam": "ipv4", 00:25:58.422 "trsvcid": "$NVMF_PORT", 00:25:58.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.422 "hdgst": ${hdgst:-false}, 00:25:58.422 "ddgst": ${ddgst:-false} 00:25:58.422 }, 00:25:58.422 "method": "bdev_nvme_attach_controller" 00:25:58.422 } 00:25:58.422 EOF 00:25:58.422 )") 00:25:58.422 15:06:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:58.422 15:06:29 -- target/dif.sh@54 -- # local file 00:25:58.422 15:06:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.422 15:06:29 -- target/dif.sh@56 -- # cat 00:25:58.422 15:06:29 -- common/autotest_common.sh@1330 -- # shift 00:25:58.422 15:06:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:58.422 15:06:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.422 15:06:29 -- nvmf/common.sh@542 -- # cat 00:25:58.422 15:06:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.422 15:06:29 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:58.422 15:06:29 -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.422 15:06:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:58.422 15:06:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:58.422 15:06:29 -- target/dif.sh@73 -- # cat 00:25:58.422 15:06:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:58.422 15:06:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:58.422 { 00:25:58.422 "params": { 00:25:58.422 "name": "Nvme$subsystem", 00:25:58.422 "trtype": "$TEST_TRANSPORT", 00:25:58.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.422 "adrfam": "ipv4", 00:25:58.422 "trsvcid": "$NVMF_PORT", 00:25:58.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.422 "hdgst": ${hdgst:-false}, 00:25:58.422 "ddgst": ${ddgst:-false} 00:25:58.422 }, 00:25:58.422 "method": "bdev_nvme_attach_controller" 00:25:58.422 } 00:25:58.422 EOF 00:25:58.422 )") 00:25:58.422 15:06:29 -- nvmf/common.sh@542 -- # cat 00:25:58.422 
15:06:29 -- target/dif.sh@72 -- # (( file++ )) 00:25:58.422 15:06:29 -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.422 15:06:29 -- nvmf/common.sh@544 -- # jq . 00:25:58.422 15:06:29 -- nvmf/common.sh@545 -- # IFS=, 00:25:58.422 15:06:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:58.422 "params": { 00:25:58.422 "name": "Nvme0", 00:25:58.422 "trtype": "tcp", 00:25:58.422 "traddr": "10.0.0.2", 00:25:58.422 "adrfam": "ipv4", 00:25:58.422 "trsvcid": "4420", 00:25:58.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.422 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:58.422 "hdgst": false, 00:25:58.422 "ddgst": false 00:25:58.422 }, 00:25:58.422 "method": "bdev_nvme_attach_controller" 00:25:58.422 },{ 00:25:58.422 "params": { 00:25:58.422 "name": "Nvme1", 00:25:58.422 "trtype": "tcp", 00:25:58.422 "traddr": "10.0.0.2", 00:25:58.422 "adrfam": "ipv4", 00:25:58.422 "trsvcid": "4420", 00:25:58.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:58.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:58.422 "hdgst": false, 00:25:58.422 "ddgst": false 00:25:58.422 }, 00:25:58.422 "method": "bdev_nvme_attach_controller" 00:25:58.422 }' 00:25:58.422 15:06:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:58.422 15:06:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:58.422 15:06:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.422 15:06:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.422 15:06:29 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:58.422 15:06:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:58.422 15:06:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:58.422 15:06:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:58.422 15:06:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:58.422 15:06:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.422 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:58.423 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:58.423 fio-3.35 00:25:58.423 Starting 2 threads 00:25:58.423 [2024-12-01 15:06:30.456234] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
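Each of the two subsystems exercised in this test was assembled from the same four RPCs traced earlier, only the index changes. Written directly against rpc.py (socket path as used throughout this log), the second target's setup is roughly:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420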
00:25:58.423 [2024-12-01 15:06:30.456300] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:08.390 00:26:08.390 filename0: (groupid=0, jobs=1): err= 0: pid=102437: Sun Dec 1 15:06:40 2024 00:26:08.391 read: IOPS=556, BW=2227KiB/s (2280kB/s)(21.8MiB/10001msec) 00:26:08.391 slat (nsec): min=5770, max=33954, avg=6965.52, stdev=1906.15 00:26:08.391 clat (usec): min=349, max=42502, avg=7163.64, stdev=15098.68 00:26:08.391 lat (usec): min=355, max=42510, avg=7170.60, stdev=15098.82 00:26:08.391 clat percentiles (usec): 00:26:08.391 | 1.00th=[ 355], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 371], 00:26:08.391 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 388], 60.00th=[ 400], 00:26:08.391 | 70.00th=[ 424], 80.00th=[ 652], 90.00th=[40633], 95.00th=[41157], 00:26:08.391 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:26:08.391 | 99.99th=[42730] 00:26:08.391 bw ( KiB/s): min= 544, max= 3616, per=48.74%, avg=2181.05, stdev=753.89, samples=19 00:26:08.391 iops : min= 136, max= 904, avg=545.26, stdev=188.47, samples=19 00:26:08.391 lat (usec) : 500=77.68%, 750=4.72%, 1000=0.43% 00:26:08.391 lat (msec) : 2=0.43%, 4=0.07%, 50=16.67% 00:26:08.391 cpu : usr=95.88%, sys=3.61%, ctx=87, majf=0, minf=0 00:26:08.391 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:08.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.391 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.391 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:08.391 filename1: (groupid=0, jobs=1): err= 0: pid=102438: Sun Dec 1 15:06:40 2024 00:26:08.391 read: IOPS=562, BW=2249KiB/s (2303kB/s)(22.0MiB/10009msec) 00:26:08.391 slat (nsec): min=5469, max=33361, avg=6983.66, stdev=1913.72 00:26:08.391 clat (usec): min=350, max=42404, avg=7092.75, stdev=15032.99 00:26:08.391 lat (usec): min=356, max=42412, avg=7099.74, stdev=15033.07 00:26:08.391 clat percentiles (usec): 00:26:08.391 | 1.00th=[ 359], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 371], 00:26:08.391 | 30.00th=[ 379], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 400], 00:26:08.391 | 70.00th=[ 424], 80.00th=[ 644], 90.00th=[40633], 95.00th=[41157], 00:26:08.391 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:08.391 | 99.99th=[42206] 00:26:08.391 bw ( KiB/s): min= 640, max= 3200, per=50.26%, avg=2249.60, stdev=638.24, samples=20 00:26:08.391 iops : min= 160, max= 800, avg=562.40, stdev=159.56, samples=20 00:26:08.391 lat (usec) : 500=77.65%, 750=5.26%, 1000=0.09% 00:26:08.391 lat (msec) : 2=0.44%, 10=0.07%, 50=16.49% 00:26:08.391 cpu : usr=95.64%, sys=3.70%, ctx=20, majf=0, minf=9 00:26:08.391 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:08.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.391 issued rwts: total=5628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.391 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:08.391 00:26:08.391 Run status group 0 (all jobs): 00:26:08.391 READ: bw=4474KiB/s (4582kB/s), 2227KiB/s-2249KiB/s (2280kB/s-2303kB/s), io=43.7MiB (45.9MB), run=10001-10009msec 00:26:08.391 15:06:40 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:08.391 15:06:40 -- target/dif.sh@43 -- # local sub 00:26:08.391 15:06:40 -- target/dif.sh@45 
-- # for sub in "$@" 00:26:08.391 15:06:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:08.391 15:06:40 -- target/dif.sh@36 -- # local sub_id=0 00:26:08.391 15:06:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:08.391 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.391 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:26:08.391 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.391 15:06:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:08.391 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.391 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:26:08.391 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.391 15:06:40 -- target/dif.sh@45 -- # for sub in "$@" 00:26:08.391 15:06:40 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:08.391 15:06:40 -- target/dif.sh@36 -- # local sub_id=1 00:26:08.391 15:06:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:08.391 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.391 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:26:08.391 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.391 15:06:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:08.391 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.391 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:26:08.391 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.391 00:26:08.391 real 0m11.270s 00:26:08.391 user 0m19.991s 00:26:08.391 sys 0m1.086s 00:26:08.391 15:06:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:08.391 ************************************ 00:26:08.391 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:26:08.391 END TEST fio_dif_1_multi_subsystems 00:26:08.391 ************************************ 00:26:08.391 15:06:40 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:08.391 15:06:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:08.391 15:06:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:08.391 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:26:08.391 ************************************ 00:26:08.391 START TEST fio_dif_rand_params 00:26:08.391 ************************************ 00:26:08.391 15:06:40 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:08.391 15:06:40 -- target/dif.sh@100 -- # local NULL_DIF 00:26:08.391 15:06:40 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:08.391 15:06:40 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:08.391 15:06:40 -- target/dif.sh@103 -- # bs=128k 00:26:08.391 15:06:40 -- target/dif.sh@103 -- # numjobs=3 00:26:08.391 15:06:40 -- target/dif.sh@103 -- # iodepth=3 00:26:08.391 15:06:40 -- target/dif.sh@103 -- # runtime=5 00:26:08.391 15:06:40 -- target/dif.sh@105 -- # create_subsystems 0 00:26:08.391 15:06:40 -- target/dif.sh@28 -- # local sub 00:26:08.391 15:06:40 -- target/dif.sh@30 -- # for sub in "$@" 00:26:08.391 15:06:40 -- target/dif.sh@31 -- # create_subsystem 0 00:26:08.391 15:06:40 -- target/dif.sh@18 -- # local sub_id=0 00:26:08.391 15:06:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:08.391 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.391 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:26:08.391 bdev_null0 00:26:08.391 
15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.391 15:06:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:08.391 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.391 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:26:08.391 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.391 15:06:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:08.391 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.391 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:26:08.391 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.391 15:06:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:08.391 15:06:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.391 15:06:40 -- common/autotest_common.sh@10 -- # set +x 00:26:08.391 [2024-12-01 15:06:40.959067] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.391 15:06:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.391 15:06:40 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:08.391 15:06:40 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:08.391 15:06:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:08.391 15:06:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.391 15:06:40 -- nvmf/common.sh@520 -- # config=() 00:26:08.391 15:06:40 -- nvmf/common.sh@520 -- # local subsystem config 00:26:08.391 15:06:40 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.391 15:06:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:08.391 15:06:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:08.391 15:06:40 -- target/dif.sh@82 -- # gen_fio_conf 00:26:08.391 15:06:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:08.391 15:06:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:08.391 { 00:26:08.391 "params": { 00:26:08.391 "name": "Nvme$subsystem", 00:26:08.391 "trtype": "$TEST_TRANSPORT", 00:26:08.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.391 "adrfam": "ipv4", 00:26:08.391 "trsvcid": "$NVMF_PORT", 00:26:08.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.391 "hdgst": ${hdgst:-false}, 00:26:08.391 "ddgst": ${ddgst:-false} 00:26:08.391 }, 00:26:08.391 "method": "bdev_nvme_attach_controller" 00:26:08.391 } 00:26:08.391 EOF 00:26:08.391 )") 00:26:08.391 15:06:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:08.391 15:06:40 -- target/dif.sh@54 -- # local file 00:26:08.391 15:06:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:08.391 15:06:40 -- target/dif.sh@56 -- # cat 00:26:08.391 15:06:40 -- common/autotest_common.sh@1330 -- # shift 00:26:08.391 15:06:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:08.391 15:06:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.391 15:06:40 -- nvmf/common.sh@542 -- # cat 00:26:08.391 15:06:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:26:08.391 15:06:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:08.391 15:06:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:08.391 15:06:40 -- target/dif.sh@72 -- # (( file <= files )) 00:26:08.391 15:06:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:08.391 15:06:40 -- nvmf/common.sh@544 -- # jq . 00:26:08.391 15:06:40 -- nvmf/common.sh@545 -- # IFS=, 00:26:08.391 15:06:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:08.391 "params": { 00:26:08.391 "name": "Nvme0", 00:26:08.391 "trtype": "tcp", 00:26:08.391 "traddr": "10.0.0.2", 00:26:08.391 "adrfam": "ipv4", 00:26:08.391 "trsvcid": "4420", 00:26:08.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:08.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:08.391 "hdgst": false, 00:26:08.391 "ddgst": false 00:26:08.391 }, 00:26:08.391 "method": "bdev_nvme_attach_controller" 00:26:08.391 }' 00:26:08.391 15:06:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:08.391 15:06:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:08.391 15:06:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.391 15:06:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:08.391 15:06:40 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:08.391 15:06:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:08.391 15:06:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:08.391 15:06:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:08.391 15:06:41 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:08.391 15:06:41 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.391 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:08.391 ... 00:26:08.391 fio-3.35 00:26:08.391 Starting 3 threads 00:26:08.650 [2024-12-01 15:06:41.627582] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
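The knobs set at target/dif.sh@103 above (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) are what produce the "bs=(R) 128KiB-128KiB ... iodepth=3" banner and the three threads just started. A hand-written fio job of the same shape would look roughly like this (thread=1, time_based and the filename line are assumptions; the generated job simply names whichever bdev the JSON config attached, here Nvme0's first namespace):

[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1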
00:26:08.650 [2024-12-01 15:06:41.628261] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:13.921 00:26:13.921 filename0: (groupid=0, jobs=1): err= 0: pid=102588: Sun Dec 1 15:06:46 2024 00:26:13.921 read: IOPS=282, BW=35.3MiB/s (37.0MB/s)(177MiB/5002msec) 00:26:13.921 slat (nsec): min=5976, max=61787, avg=12062.05, stdev=5688.25 00:26:13.921 clat (usec): min=4383, max=52696, avg=10610.72, stdev=9448.51 00:26:13.921 lat (usec): min=4392, max=52718, avg=10622.79, stdev=9448.75 00:26:13.921 clat percentiles (usec): 00:26:13.921 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6325], 00:26:13.921 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 8979], 60.00th=[ 9765], 00:26:13.921 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11338], 95.00th=[46400], 00:26:13.921 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:26:13.921 | 99.99th=[52691] 00:26:13.921 bw ( KiB/s): min=21547, max=49408, per=32.06%, avg=35970.44, stdev=8849.25, samples=9 00:26:13.921 iops : min= 168, max= 386, avg=280.89, stdev=69.19, samples=9 00:26:13.921 lat (msec) : 10=62.89%, 20=31.59%, 50=4.11%, 100=1.42% 00:26:13.921 cpu : usr=93.94%, sys=4.46%, ctx=7, majf=0, minf=9 00:26:13.921 IO depths : 1=3.4%, 2=96.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.921 issued rwts: total=1412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.921 filename0: (groupid=0, jobs=1): err= 0: pid=102589: Sun Dec 1 15:06:46 2024 00:26:13.921 read: IOPS=355, BW=44.4MiB/s (46.5MB/s)(222MiB/5001msec) 00:26:13.921 slat (nsec): min=5804, max=56212, avg=10011.77, stdev=5798.38 00:26:13.921 clat (usec): min=3103, max=52178, avg=8422.17, stdev=4062.72 00:26:13.921 lat (usec): min=3109, max=52184, avg=8432.18, stdev=4063.13 00:26:13.921 clat percentiles (usec): 00:26:13.921 | 1.00th=[ 3458], 5.00th=[ 3556], 10.00th=[ 3589], 20.00th=[ 5538], 00:26:13.921 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8717], 00:26:13.921 | 70.00th=[10814], 80.00th=[11338], 90.00th=[11863], 95.00th=[12125], 00:26:13.921 | 99.00th=[12780], 99.50th=[45876], 99.90th=[51119], 99.95th=[52167], 00:26:13.921 | 99.99th=[52167] 00:26:13.921 bw ( KiB/s): min=37707, max=58251, per=40.53%, avg=45478.00, stdev=7388.78, samples=9 00:26:13.921 iops : min= 294, max= 455, avg=355.22, stdev=57.78, samples=9 00:26:13.921 lat (msec) : 4=17.34%, 10=46.96%, 20=35.19%, 50=0.34%, 100=0.17% 00:26:13.921 cpu : usr=92.30%, sys=5.70%, ctx=4, majf=0, minf=9 00:26:13.921 IO depths : 1=31.9%, 2=68.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.921 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.921 filename0: (groupid=0, jobs=1): err= 0: pid=102590: Sun Dec 1 15:06:46 2024 00:26:13.921 read: IOPS=239, BW=29.9MiB/s (31.4MB/s)(150MiB/5002msec) 00:26:13.921 slat (nsec): min=5845, max=64398, avg=14300.72, stdev=7247.68 00:26:13.921 clat (usec): min=2727, max=51518, avg=12511.85, stdev=13042.02 00:26:13.921 lat (usec): min=2739, max=51528, avg=12526.15, stdev=13042.15 00:26:13.921 clat percentiles 
(usec): 00:26:13.921 | 1.00th=[ 3359], 5.00th=[ 5407], 10.00th=[ 6259], 20.00th=[ 6652], 00:26:13.921 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:26:13.921 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[46924], 95.00th=[49021], 00:26:13.921 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[51643], 00:26:13.921 | 99.99th=[51643] 00:26:13.921 bw ( KiB/s): min=21760, max=38912, per=27.29%, avg=30617.33, stdev=6886.43, samples=9 00:26:13.921 iops : min= 170, max= 304, avg=239.11, stdev=53.69, samples=9 00:26:13.921 lat (msec) : 4=3.68%, 10=84.04%, 20=1.00%, 50=9.02%, 100=2.26% 00:26:13.921 cpu : usr=94.26%, sys=4.20%, ctx=36, majf=0, minf=0 00:26:13.921 IO depths : 1=6.9%, 2=93.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.921 issued rwts: total=1197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.921 00:26:13.921 Run status group 0 (all jobs): 00:26:13.921 READ: bw=110MiB/s (115MB/s), 29.9MiB/s-44.4MiB/s (31.4MB/s-46.5MB/s), io=548MiB (575MB), run=5001-5002msec 00:26:13.921 15:06:47 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:13.921 15:06:47 -- target/dif.sh@43 -- # local sub 00:26:13.921 15:06:47 -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.921 15:06:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:13.921 15:06:47 -- target/dif.sh@36 -- # local sub_id=0 00:26:14.181 15:06:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:14.181 15:06:47 -- target/dif.sh@109 -- # bs=4k 00:26:14.181 15:06:47 -- target/dif.sh@109 -- # numjobs=8 00:26:14.181 15:06:47 -- target/dif.sh@109 -- # iodepth=16 00:26:14.181 15:06:47 -- target/dif.sh@109 -- # runtime= 00:26:14.181 15:06:47 -- target/dif.sh@109 -- # files=2 00:26:14.181 15:06:47 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:14.181 15:06:47 -- target/dif.sh@28 -- # local sub 00:26:14.181 15:06:47 -- target/dif.sh@30 -- # for sub in "$@" 00:26:14.181 15:06:47 -- target/dif.sh@31 -- # create_subsystem 0 00:26:14.181 15:06:47 -- target/dif.sh@18 -- # local sub_id=0 00:26:14.181 15:06:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 bdev_null0 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 15:06:47 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 [2024-12-01 15:06:47.085382] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@30 -- # for sub in "$@" 00:26:14.181 15:06:47 -- target/dif.sh@31 -- # create_subsystem 1 00:26:14.181 15:06:47 -- target/dif.sh@18 -- # local sub_id=1 00:26:14.181 15:06:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 bdev_null1 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@30 -- # for sub in "$@" 00:26:14.181 15:06:47 -- target/dif.sh@31 -- # create_subsystem 2 00:26:14.181 15:06:47 -- target/dif.sh@18 -- # local sub_id=2 00:26:14.181 15:06:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 bdev_null2 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:14.181 15:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.181 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.181 15:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.181 15:06:47 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:14.181 15:06:47 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:14.181 15:06:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:14.181 15:06:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.181 15:06:47 -- nvmf/common.sh@520 -- # config=() 00:26:14.181 15:06:47 -- nvmf/common.sh@520 -- # local subsystem config 00:26:14.181 15:06:47 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.181 15:06:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:14.181 15:06:47 -- target/dif.sh@82 -- # gen_fio_conf 00:26:14.181 15:06:47 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:14.181 15:06:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:14.181 { 00:26:14.181 "params": { 00:26:14.181 "name": "Nvme$subsystem", 00:26:14.181 "trtype": "$TEST_TRANSPORT", 00:26:14.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.181 "adrfam": "ipv4", 00:26:14.181 "trsvcid": "$NVMF_PORT", 00:26:14.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.181 "hdgst": ${hdgst:-false}, 00:26:14.181 "ddgst": ${ddgst:-false} 00:26:14.181 }, 00:26:14.181 "method": "bdev_nvme_attach_controller" 00:26:14.181 } 00:26:14.181 EOF 00:26:14.181 )") 00:26:14.181 15:06:47 -- target/dif.sh@54 -- # local file 00:26:14.181 15:06:47 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:14.181 15:06:47 -- target/dif.sh@56 -- # cat 00:26:14.181 15:06:47 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:14.181 15:06:47 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:14.181 15:06:47 -- common/autotest_common.sh@1330 -- # shift 00:26:14.181 15:06:47 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:14.181 15:06:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:14.181 15:06:47 -- nvmf/common.sh@542 -- # cat 00:26:14.181 15:06:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:14.181 15:06:47 -- target/dif.sh@72 -- # (( file <= files )) 00:26:14.181 15:06:47 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:14.181 15:06:47 -- target/dif.sh@73 -- # cat 00:26:14.181 15:06:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:14.181 15:06:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:14.181 15:06:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:14.181 15:06:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:14.181 { 00:26:14.181 "params": { 00:26:14.181 "name": "Nvme$subsystem", 00:26:14.181 "trtype": "$TEST_TRANSPORT", 00:26:14.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.181 "adrfam": "ipv4", 00:26:14.181 "trsvcid": "$NVMF_PORT", 00:26:14.181 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.181 "hdgst": ${hdgst:-false}, 00:26:14.181 "ddgst": ${ddgst:-false} 00:26:14.181 }, 00:26:14.181 "method": "bdev_nvme_attach_controller" 00:26:14.181 } 00:26:14.181 EOF 00:26:14.181 )") 00:26:14.181 15:06:47 -- target/dif.sh@72 -- # (( file++ )) 00:26:14.181 15:06:47 -- target/dif.sh@72 -- # (( file <= files )) 00:26:14.181 15:06:47 -- nvmf/common.sh@542 -- # cat 00:26:14.181 15:06:47 -- target/dif.sh@73 -- # cat 00:26:14.181 15:06:47 -- target/dif.sh@72 -- # (( file++ )) 00:26:14.181 15:06:47 -- target/dif.sh@72 -- # (( file <= files )) 00:26:14.181 15:06:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:14.181 15:06:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:14.181 { 00:26:14.181 "params": { 00:26:14.181 "name": "Nvme$subsystem", 00:26:14.181 "trtype": "$TEST_TRANSPORT", 00:26:14.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.181 "adrfam": "ipv4", 00:26:14.181 "trsvcid": "$NVMF_PORT", 00:26:14.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.181 "hdgst": ${hdgst:-false}, 00:26:14.181 "ddgst": ${ddgst:-false} 00:26:14.181 }, 00:26:14.182 "method": "bdev_nvme_attach_controller" 00:26:14.182 } 00:26:14.182 EOF 00:26:14.182 )") 00:26:14.182 15:06:47 -- nvmf/common.sh@542 -- # cat 00:26:14.182 15:06:47 -- nvmf/common.sh@544 -- # jq . 00:26:14.182 15:06:47 -- nvmf/common.sh@545 -- # IFS=, 00:26:14.182 15:06:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:14.182 "params": { 00:26:14.182 "name": "Nvme0", 00:26:14.182 "trtype": "tcp", 00:26:14.182 "traddr": "10.0.0.2", 00:26:14.182 "adrfam": "ipv4", 00:26:14.182 "trsvcid": "4420", 00:26:14.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:14.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:14.182 "hdgst": false, 00:26:14.182 "ddgst": false 00:26:14.182 }, 00:26:14.182 "method": "bdev_nvme_attach_controller" 00:26:14.182 },{ 00:26:14.182 "params": { 00:26:14.182 "name": "Nvme1", 00:26:14.182 "trtype": "tcp", 00:26:14.182 "traddr": "10.0.0.2", 00:26:14.182 "adrfam": "ipv4", 00:26:14.182 "trsvcid": "4420", 00:26:14.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:14.182 "hdgst": false, 00:26:14.182 "ddgst": false 00:26:14.182 }, 00:26:14.182 "method": "bdev_nvme_attach_controller" 00:26:14.182 },{ 00:26:14.182 "params": { 00:26:14.182 "name": "Nvme2", 00:26:14.182 "trtype": "tcp", 00:26:14.182 "traddr": "10.0.0.2", 00:26:14.182 "adrfam": "ipv4", 00:26:14.182 "trsvcid": "4420", 00:26:14.182 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:14.182 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:14.182 "hdgst": false, 00:26:14.182 "ddgst": false 00:26:14.182 }, 00:26:14.182 "method": "bdev_nvme_attach_controller" 00:26:14.182 }' 00:26:14.182 15:06:47 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:14.182 15:06:47 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:14.182 15:06:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:14.182 15:06:47 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:14.182 15:06:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:14.182 15:06:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:14.182 15:06:47 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:14.182 15:06:47 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:14.182 
15:06:47 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:14.182 15:06:47 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.441 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:14.441 ... 00:26:14.441 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:14.441 ... 00:26:14.441 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:14.441 ... 00:26:14.441 fio-3.35 00:26:14.441 Starting 24 threads 00:26:15.008 [2024-12-01 15:06:47.997371] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:15.008 [2024-12-01 15:06:47.997610] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:27.204 00:26:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=102697: Sun Dec 1 15:06:58 2024 00:26:27.204 read: IOPS=295, BW=1181KiB/s (1209kB/s)(11.6MiB/10028msec) 00:26:27.204 slat (usec): min=5, max=8021, avg=15.91, stdev=164.80 00:26:27.204 clat (msec): min=4, max=138, avg=54.06, stdev=18.97 00:26:27.204 lat (msec): min=4, max=138, avg=54.08, stdev=18.97 00:26:27.204 clat percentiles (msec): 00:26:27.204 | 1.00th=[ 7], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 39], 00:26:27.204 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 58], 00:26:27.204 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 87], 00:26:27.204 | 99.00th=[ 107], 99.50th=[ 115], 99.90th=[ 140], 99.95th=[ 140], 00:26:27.204 | 99.99th=[ 140] 00:26:27.204 bw ( KiB/s): min= 784, max= 1805, per=4.59%, avg=1176.40, stdev=264.57, samples=20 00:26:27.204 iops : min= 196, max= 451, avg=294.05, stdev=66.13, samples=20 00:26:27.204 lat (msec) : 10=1.08%, 20=1.62%, 50=46.08%, 100=50.17%, 250=1.05% 00:26:27.204 cpu : usr=39.30%, sys=0.48%, ctx=1051, majf=0, minf=9 00:26:27.204 IO depths : 1=1.3%, 2=2.9%, 4=9.8%, 8=73.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 issued rwts: total=2960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=102698: Sun Dec 1 15:06:58 2024 00:26:27.204 read: IOPS=256, BW=1028KiB/s (1052kB/s)(10.0MiB/10006msec) 00:26:27.204 slat (usec): min=4, max=9029, avg=26.75, stdev=335.88 00:26:27.204 clat (msec): min=18, max=137, avg=62.11, stdev=17.76 00:26:27.204 lat (msec): min=18, max=137, avg=62.14, stdev=17.77 00:26:27.204 clat percentiles (msec): 00:26:27.204 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 48], 00:26:27.204 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 00:26:27.204 | 70.00th=[ 69], 80.00th=[ 78], 90.00th=[ 84], 95.00th=[ 92], 00:26:27.204 | 99.00th=[ 116], 99.50th=[ 123], 99.90th=[ 138], 99.95th=[ 138], 00:26:27.204 | 99.99th=[ 138] 00:26:27.204 bw ( KiB/s): min= 768, max= 1280, per=3.95%, avg=1010.68, stdev=148.73, samples=19 00:26:27.204 iops : min= 192, max= 320, avg=252.63, stdev=37.22, samples=19 00:26:27.204 lat (msec) : 20=0.39%, 50=24.08%, 100=72.62%, 250=2.92% 00:26:27.204 cpu : usr=39.33%, sys=0.53%, ctx=1379, majf=0, minf=9 00:26:27.204 IO depths : 1=1.8%, 
2=4.3%, 4=12.6%, 8=69.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 complete : 0=0.0%, 4=91.0%, 8=4.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 issued rwts: total=2571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=102699: Sun Dec 1 15:06:58 2024 00:26:27.204 read: IOPS=241, BW=968KiB/s (991kB/s)(9684KiB/10006msec) 00:26:27.204 slat (usec): min=4, max=8027, avg=15.39, stdev=163.06 00:26:27.204 clat (msec): min=3, max=131, avg=66.03, stdev=20.14 00:26:27.204 lat (msec): min=3, max=131, avg=66.04, stdev=20.14 00:26:27.204 clat percentiles (msec): 00:26:27.204 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 50], 00:26:27.204 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 69], 00:26:27.204 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 93], 95.00th=[ 106], 00:26:27.204 | 99.00th=[ 126], 99.50th=[ 131], 99.90th=[ 132], 99.95th=[ 132], 00:26:27.204 | 99.99th=[ 132] 00:26:27.204 bw ( KiB/s): min= 680, max= 1296, per=3.71%, avg=949.53, stdev=155.55, samples=19 00:26:27.204 iops : min= 170, max= 324, avg=237.37, stdev=38.90, samples=19 00:26:27.204 lat (msec) : 4=0.25%, 10=0.41%, 50=20.45%, 100=72.00%, 250=6.90% 00:26:27.204 cpu : usr=32.58%, sys=0.55%, ctx=943, majf=0, minf=9 00:26:27.204 IO depths : 1=1.7%, 2=3.8%, 4=12.3%, 8=70.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 issued rwts: total=2421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=102700: Sun Dec 1 15:06:58 2024 00:26:27.204 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10014msec) 00:26:27.204 slat (usec): min=4, max=8025, avg=20.00, stdev=235.80 00:26:27.204 clat (msec): min=23, max=127, avg=61.44, stdev=17.73 00:26:27.204 lat (msec): min=23, max=127, avg=61.46, stdev=17.73 00:26:27.204 clat percentiles (msec): 00:26:27.204 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 48], 00:26:27.204 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 63], 00:26:27.204 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 91], 00:26:27.204 | 99.00th=[ 113], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 128], 00:26:27.204 | 99.99th=[ 128] 00:26:27.204 bw ( KiB/s): min= 768, max= 1224, per=4.00%, avg=1023.74, stdev=156.49, samples=19 00:26:27.204 iops : min= 192, max= 306, avg=255.89, stdev=39.14, samples=19 00:26:27.204 lat (msec) : 50=26.75%, 100=71.14%, 250=2.11% 00:26:27.204 cpu : usr=33.37%, sys=0.44%, ctx=970, majf=0, minf=9 00:26:27.204 IO depths : 1=0.8%, 2=2.3%, 4=10.5%, 8=73.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 issued rwts: total=2602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=102701: Sun Dec 1 15:06:58 2024 00:26:27.204 read: IOPS=279, BW=1118KiB/s (1144kB/s)(10.9MiB/10021msec) 00:26:27.204 slat (nsec): min=5149, max=44703, avg=11393.83, stdev=6778.71 00:26:27.204 clat (msec): min=21, max=125, avg=57.17, stdev=18.47 
00:26:27.204 lat (msec): min=21, max=125, avg=57.18, stdev=18.47 00:26:27.204 clat percentiles (msec): 00:26:27.204 | 1.00th=[ 25], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 40], 00:26:27.204 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 61], 00:26:27.204 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 93], 00:26:27.204 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 127], 99.95th=[ 127], 00:26:27.204 | 99.99th=[ 127] 00:26:27.204 bw ( KiB/s): min= 816, max= 1472, per=4.37%, avg=1118.21, stdev=208.18, samples=19 00:26:27.204 iops : min= 204, max= 368, avg=279.53, stdev=52.08, samples=19 00:26:27.204 lat (msec) : 50=39.61%, 100=57.79%, 250=2.61% 00:26:27.204 cpu : usr=38.19%, sys=0.52%, ctx=1055, majf=0, minf=9 00:26:27.204 IO depths : 1=1.2%, 2=2.7%, 4=9.5%, 8=74.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 complete : 0=0.0%, 4=90.0%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=102702: Sun Dec 1 15:06:58 2024 00:26:27.204 read: IOPS=247, BW=991KiB/s (1014kB/s)(9912KiB/10005msec) 00:26:27.204 slat (usec): min=4, max=9047, avg=24.76, stdev=302.21 00:26:27.204 clat (msec): min=4, max=147, avg=64.41, stdev=18.22 00:26:27.204 lat (msec): min=4, max=147, avg=64.43, stdev=18.22 00:26:27.204 clat percentiles (msec): 00:26:27.204 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 53], 00:26:27.204 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 64], 00:26:27.204 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 96], 00:26:27.204 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 125], 99.95th=[ 125], 00:26:27.204 | 99.99th=[ 148] 00:26:27.204 bw ( KiB/s): min= 640, max= 1328, per=3.82%, avg=978.47, stdev=155.71, samples=19 00:26:27.204 iops : min= 160, max= 332, avg=244.58, stdev=38.92, samples=19 00:26:27.204 lat (msec) : 10=0.85%, 50=15.58%, 100=79.58%, 250=4.00% 00:26:27.204 cpu : usr=44.91%, sys=0.64%, ctx=1537, majf=0, minf=9 00:26:27.204 IO depths : 1=2.6%, 2=5.9%, 4=16.3%, 8=64.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 complete : 0=0.0%, 4=91.8%, 8=3.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 issued rwts: total=2478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=102703: Sun Dec 1 15:06:58 2024 00:26:27.204 read: IOPS=315, BW=1263KiB/s (1293kB/s)(12.4MiB/10042msec) 00:26:27.204 slat (usec): min=4, max=8025, avg=16.31, stdev=179.44 00:26:27.204 clat (msec): min=18, max=119, avg=50.49, stdev=16.28 00:26:27.204 lat (msec): min=18, max=119, avg=50.51, stdev=16.28 00:26:27.204 clat percentiles (msec): 00:26:27.204 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 37], 00:26:27.204 | 30.00th=[ 41], 40.00th=[ 45], 50.00th=[ 48], 60.00th=[ 52], 00:26:27.204 | 70.00th=[ 58], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 84], 00:26:27.204 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 121], 99.95th=[ 121], 00:26:27.204 | 99.99th=[ 121] 00:26:27.204 bw ( KiB/s): min= 816, max= 1552, per=4.93%, avg=1261.70, stdev=218.74, samples=20 00:26:27.204 iops : min= 204, max= 388, avg=315.40, stdev=54.66, samples=20 00:26:27.204 lat (msec) : 20=0.63%, 50=57.48%, 100=40.63%, 250=1.26% 
00:26:27.204 cpu : usr=40.36%, sys=0.48%, ctx=1243, majf=0, minf=9 00:26:27.204 IO depths : 1=0.3%, 2=0.8%, 4=6.3%, 8=79.2%, 16=13.5%, 32=0.0%, >=64=0.0% 00:26:27.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 complete : 0=0.0%, 4=88.9%, 8=6.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.204 issued rwts: total=3170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.204 filename0: (groupid=0, jobs=1): err= 0: pid=102704: Sun Dec 1 15:06:58 2024 00:26:27.204 read: IOPS=239, BW=960KiB/s (983kB/s)(9596KiB/10001msec) 00:26:27.204 slat (usec): min=3, max=4040, avg=18.86, stdev=144.32 00:26:27.204 clat (msec): min=26, max=144, avg=66.53, stdev=20.02 00:26:27.204 lat (msec): min=26, max=144, avg=66.55, stdev=20.02 00:26:27.204 clat percentiles (msec): 00:26:27.204 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 52], 00:26:27.204 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 67], 00:26:27.205 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 112], 00:26:27.205 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 146], 99.95th=[ 146], 00:26:27.205 | 99.99th=[ 146] 00:26:27.205 bw ( KiB/s): min= 592, max= 1152, per=3.67%, avg=940.21, stdev=149.08, samples=19 00:26:27.205 iops : min= 148, max= 288, avg=235.05, stdev=37.27, samples=19 00:26:27.205 lat (msec) : 50=18.22%, 100=74.07%, 250=7.71% 00:26:27.205 cpu : usr=39.78%, sys=0.57%, ctx=1036, majf=0, minf=9 00:26:27.205 IO depths : 1=2.5%, 2=5.5%, 4=14.4%, 8=66.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:27.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 complete : 0=0.0%, 4=91.5%, 8=4.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 issued rwts: total=2399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.205 filename1: (groupid=0, jobs=1): err= 0: pid=102705: Sun Dec 1 15:06:58 2024 00:26:27.205 read: IOPS=279, BW=1119KiB/s (1146kB/s)(11.0MiB/10026msec) 00:26:27.205 slat (usec): min=6, max=8017, avg=16.51, stdev=169.23 00:26:27.205 clat (msec): min=4, max=121, avg=57.02, stdev=18.87 00:26:27.205 lat (msec): min=4, max=121, avg=57.04, stdev=18.87 00:26:27.205 clat percentiles (msec): 00:26:27.205 | 1.00th=[ 7], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 44], 00:26:27.205 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 61], 00:26:27.205 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 90], 00:26:27.205 | 99.00th=[ 108], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 122], 00:26:27.205 | 99.99th=[ 122] 00:26:27.205 bw ( KiB/s): min= 816, max= 1781, per=4.37%, avg=1118.05, stdev=229.32, samples=20 00:26:27.205 iops : min= 204, max= 445, avg=279.45, stdev=57.30, samples=20 00:26:27.205 lat (msec) : 10=1.71%, 20=1.71%, 50=31.60%, 100=63.41%, 250=1.57% 00:26:27.205 cpu : usr=41.33%, sys=0.50%, ctx=1127, majf=0, minf=9 00:26:27.205 IO depths : 1=1.3%, 2=3.0%, 4=9.9%, 8=73.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:27.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 issued rwts: total=2804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.205 filename1: (groupid=0, jobs=1): err= 0: pid=102706: Sun Dec 1 15:06:58 2024 00:26:27.205 read: IOPS=269, BW=1080KiB/s (1105kB/s)(10.6MiB/10034msec) 00:26:27.205 slat (usec): min=4, max=4031, 
avg=14.17, stdev=98.04 00:26:27.205 clat (msec): min=16, max=142, avg=59.16, stdev=17.95 00:26:27.205 lat (msec): min=16, max=142, avg=59.17, stdev=17.95 00:26:27.205 clat percentiles (msec): 00:26:27.205 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 46], 00:26:27.205 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 62], 00:26:27.205 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 88], 00:26:27.205 | 99.00th=[ 112], 99.50th=[ 126], 99.90th=[ 144], 99.95th=[ 144], 00:26:27.205 | 99.99th=[ 144] 00:26:27.205 bw ( KiB/s): min= 640, max= 1456, per=4.20%, avg=1076.90, stdev=186.94, samples=20 00:26:27.205 iops : min= 160, max= 364, avg=269.20, stdev=46.74, samples=20 00:26:27.205 lat (msec) : 20=0.26%, 50=33.53%, 100=63.40%, 250=2.81% 00:26:27.205 cpu : usr=41.00%, sys=0.39%, ctx=1114, majf=0, minf=9 00:26:27.205 IO depths : 1=1.8%, 2=4.1%, 4=13.3%, 8=69.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:27.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 complete : 0=0.0%, 4=90.8%, 8=4.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 issued rwts: total=2708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.205 filename1: (groupid=0, jobs=1): err= 0: pid=102707: Sun Dec 1 15:06:58 2024 00:26:27.205 read: IOPS=296, BW=1188KiB/s (1216kB/s)(11.6MiB/10038msec) 00:26:27.205 slat (nsec): min=6580, max=81121, avg=11516.61, stdev=6799.73 00:26:27.205 clat (msec): min=2, max=139, avg=53.77, stdev=19.65 00:26:27.205 lat (msec): min=2, max=139, avg=53.78, stdev=19.65 00:26:27.205 clat percentiles (msec): 00:26:27.205 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 34], 20.00th=[ 38], 00:26:27.205 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 59], 00:26:27.205 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 78], 95.00th=[ 84], 00:26:27.205 | 99.00th=[ 109], 99.50th=[ 120], 99.90th=[ 140], 99.95th=[ 140], 00:26:27.205 | 99.99th=[ 140] 00:26:27.205 bw ( KiB/s): min= 640, max= 2123, per=4.63%, avg=1185.55, stdev=289.21, samples=20 00:26:27.205 iops : min= 160, max= 530, avg=296.30, stdev=72.19, samples=20 00:26:27.205 lat (msec) : 4=1.07%, 10=3.22%, 20=1.07%, 50=37.84%, 100=55.22% 00:26:27.205 lat (msec) : 250=1.58% 00:26:27.205 cpu : usr=33.15%, sys=0.43%, ctx=943, majf=0, minf=0 00:26:27.205 IO depths : 1=0.6%, 2=1.5%, 4=7.7%, 8=76.7%, 16=13.5%, 32=0.0%, >=64=0.0% 00:26:27.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 complete : 0=0.0%, 4=89.7%, 8=6.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 issued rwts: total=2981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.205 filename1: (groupid=0, jobs=1): err= 0: pid=102708: Sun Dec 1 15:06:58 2024 00:26:27.205 read: IOPS=276, BW=1106KiB/s (1132kB/s)(10.8MiB/10002msec) 00:26:27.205 slat (usec): min=4, max=4014, avg=15.24, stdev=107.72 00:26:27.205 clat (msec): min=10, max=146, avg=57.77, stdev=19.11 00:26:27.205 lat (msec): min=10, max=146, avg=57.78, stdev=19.11 00:26:27.205 clat percentiles (msec): 00:26:27.205 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 41], 00:26:27.205 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 61], 00:26:27.205 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 92], 00:26:27.205 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 146], 99.95th=[ 146], 00:26:27.205 | 99.99th=[ 146] 00:26:27.205 bw ( KiB/s): min= 768, max= 1536, per=4.31%, avg=1103.58, stdev=177.90, samples=19 
00:26:27.205 iops : min= 192, max= 384, avg=275.89, stdev=44.47, samples=19 00:26:27.205 lat (msec) : 20=2.31%, 50=32.51%, 100=62.39%, 250=2.78% 00:26:27.205 cpu : usr=43.61%, sys=0.60%, ctx=1224, majf=0, minf=9 00:26:27.205 IO depths : 1=1.9%, 2=3.9%, 4=11.7%, 8=70.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:27.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 complete : 0=0.0%, 4=90.6%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 issued rwts: total=2765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.205 filename1: (groupid=0, jobs=1): err= 0: pid=102709: Sun Dec 1 15:06:58 2024 00:26:27.205 read: IOPS=250, BW=1003KiB/s (1027kB/s)(9.82MiB/10033msec) 00:26:27.205 slat (usec): min=3, max=11038, avg=25.64, stdev=335.57 00:26:27.205 clat (msec): min=32, max=121, avg=63.68, stdev=15.72 00:26:27.205 lat (msec): min=32, max=121, avg=63.70, stdev=15.72 00:26:27.205 clat percentiles (msec): 00:26:27.205 | 1.00th=[ 34], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 51], 00:26:27.205 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 67], 00:26:27.205 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 84], 95.00th=[ 93], 00:26:27.205 | 99.00th=[ 111], 99.50th=[ 111], 99.90th=[ 122], 99.95th=[ 122], 00:26:27.205 | 99.99th=[ 122] 00:26:27.205 bw ( KiB/s): min= 768, max= 1184, per=3.90%, avg=999.75, stdev=115.35, samples=20 00:26:27.205 iops : min= 192, max= 296, avg=249.90, stdev=28.83, samples=20 00:26:27.205 lat (msec) : 50=19.60%, 100=78.25%, 250=2.15% 00:26:27.205 cpu : usr=33.00%, sys=0.41%, ctx=1013, majf=0, minf=9 00:26:27.205 IO depths : 1=2.1%, 2=4.9%, 4=14.2%, 8=67.6%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:27.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 issued rwts: total=2515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.205 filename1: (groupid=0, jobs=1): err= 0: pid=102710: Sun Dec 1 15:06:58 2024 00:26:27.205 read: IOPS=248, BW=993KiB/s (1017kB/s)(9960KiB/10033msec) 00:26:27.205 slat (usec): min=5, max=8029, avg=18.79, stdev=201.34 00:26:27.205 clat (msec): min=24, max=135, avg=64.31, stdev=19.00 00:26:27.205 lat (msec): min=24, max=135, avg=64.33, stdev=19.00 00:26:27.205 clat percentiles (msec): 00:26:27.205 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 49], 00:26:27.205 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 64], 00:26:27.205 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 102], 00:26:27.205 | 99.00th=[ 118], 99.50th=[ 124], 99.90th=[ 136], 99.95th=[ 136], 00:26:27.205 | 99.99th=[ 136] 00:26:27.205 bw ( KiB/s): min= 640, max= 1248, per=3.86%, avg=987.68, stdev=162.90, samples=19 00:26:27.205 iops : min= 160, max= 312, avg=246.89, stdev=40.73, samples=19 00:26:27.205 lat (msec) : 50=24.18%, 100=70.24%, 250=5.58% 00:26:27.205 cpu : usr=35.24%, sys=0.54%, ctx=1001, majf=0, minf=9 00:26:27.205 IO depths : 1=1.3%, 2=2.9%, 4=11.6%, 8=72.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:27.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.205 issued rwts: total=2490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.205 filename1: (groupid=0, jobs=1): err= 0: pid=102711: Sun Dec 1 
15:06:58 2024 00:26:27.205 read: IOPS=288, BW=1154KiB/s (1182kB/s)(11.3MiB/10017msec) 00:26:27.205 slat (usec): min=4, max=8035, avg=23.24, stdev=278.91 00:26:27.205 clat (msec): min=11, max=124, avg=55.25, stdev=18.96 00:26:27.206 lat (msec): min=11, max=124, avg=55.28, stdev=18.97 00:26:27.206 clat percentiles (msec): 00:26:27.206 | 1.00th=[ 15], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 38], 00:26:27.206 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 59], 00:26:27.206 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 93], 00:26:27.206 | 99.00th=[ 116], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 125], 00:26:27.206 | 99.99th=[ 125] 00:26:27.206 bw ( KiB/s): min= 736, max= 1560, per=4.51%, avg=1153.85, stdev=216.46, samples=20 00:26:27.206 iops : min= 184, max= 390, avg=288.45, stdev=54.12, samples=20 00:26:27.206 lat (msec) : 20=1.66%, 50=42.62%, 100=53.34%, 250=2.39% 00:26:27.206 cpu : usr=40.49%, sys=0.50%, ctx=1145, majf=0, minf=9 00:26:27.206 IO depths : 1=1.0%, 2=2.1%, 4=8.9%, 8=75.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:27.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 issued rwts: total=2891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.206 filename1: (groupid=0, jobs=1): err= 0: pid=102712: Sun Dec 1 15:06:58 2024 00:26:27.206 read: IOPS=260, BW=1040KiB/s (1065kB/s)(10.2MiB/10014msec) 00:26:27.206 slat (usec): min=6, max=8025, avg=15.08, stdev=157.20 00:26:27.206 clat (msec): min=24, max=142, avg=61.39, stdev=17.37 00:26:27.206 lat (msec): min=24, max=142, avg=61.41, stdev=17.37 00:26:27.206 clat percentiles (msec): 00:26:27.206 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 48], 00:26:27.206 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:26:27.206 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 94], 00:26:27.206 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 144], 99.95th=[ 144], 00:26:27.206 | 99.99th=[ 144] 00:26:27.206 bw ( KiB/s): min= 696, max= 1280, per=4.06%, avg=1039.20, stdev=147.79, samples=20 00:26:27.206 iops : min= 174, max= 320, avg=259.75, stdev=36.97, samples=20 00:26:27.206 lat (msec) : 50=27.73%, 100=69.74%, 250=2.53% 00:26:27.206 cpu : usr=34.08%, sys=0.39%, ctx=875, majf=0, minf=9 00:26:27.206 IO depths : 1=1.2%, 2=2.8%, 4=9.9%, 8=73.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:27.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 issued rwts: total=2604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.206 filename2: (groupid=0, jobs=1): err= 0: pid=102713: Sun Dec 1 15:06:58 2024 00:26:27.206 read: IOPS=247, BW=990KiB/s (1014kB/s)(9904KiB/10004msec) 00:26:27.206 slat (usec): min=4, max=8022, avg=19.92, stdev=201.48 00:26:27.206 clat (msec): min=3, max=132, avg=64.49, stdev=18.96 00:26:27.206 lat (msec): min=4, max=132, avg=64.51, stdev=18.96 00:26:27.206 clat percentiles (msec): 00:26:27.206 | 1.00th=[ 10], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 51], 00:26:27.206 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 67], 00:26:27.206 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 101], 00:26:27.206 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 133], 99.95th=[ 133], 00:26:27.206 | 99.99th=[ 133] 00:26:27.206 bw ( KiB/s): min= 
656, max= 1248, per=3.81%, avg=975.26, stdev=152.02, samples=19 00:26:27.206 iops : min= 164, max= 312, avg=243.79, stdev=37.98, samples=19 00:26:27.206 lat (msec) : 4=0.04%, 10=1.25%, 50=17.12%, 100=77.18%, 250=4.40% 00:26:27.206 cpu : usr=42.95%, sys=0.52%, ctx=1244, majf=0, minf=9 00:26:27.206 IO depths : 1=2.6%, 2=5.9%, 4=15.6%, 8=65.1%, 16=10.8%, 32=0.0%, >=64=0.0% 00:26:27.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 complete : 0=0.0%, 4=91.7%, 8=3.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 issued rwts: total=2476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.206 filename2: (groupid=0, jobs=1): err= 0: pid=102714: Sun Dec 1 15:06:58 2024 00:26:27.206 read: IOPS=246, BW=984KiB/s (1008kB/s)(9844KiB/10001msec) 00:26:27.206 slat (usec): min=4, max=8020, avg=18.25, stdev=202.13 00:26:27.206 clat (usec): min=1626, max=144865, avg=64902.52, stdev=19721.53 00:26:27.206 lat (usec): min=1633, max=144895, avg=64920.78, stdev=19721.95 00:26:27.206 clat percentiles (msec): 00:26:27.206 | 1.00th=[ 4], 5.00th=[ 35], 10.00th=[ 46], 20.00th=[ 51], 00:26:27.206 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 71], 00:26:27.206 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 95], 00:26:27.206 | 99.00th=[ 113], 99.50th=[ 128], 99.90th=[ 146], 99.95th=[ 146], 00:26:27.206 | 99.99th=[ 146] 00:26:27.206 bw ( KiB/s): min= 656, max= 1152, per=3.72%, avg=951.32, stdev=121.83, samples=19 00:26:27.206 iops : min= 164, max= 288, avg=237.79, stdev=30.43, samples=19 00:26:27.206 lat (msec) : 2=0.65%, 4=1.30%, 10=0.65%, 50=16.25%, 100=77.65% 00:26:27.206 lat (msec) : 250=3.49% 00:26:27.206 cpu : usr=32.64%, sys=0.57%, ctx=952, majf=0, minf=9 00:26:27.206 IO depths : 1=1.8%, 2=4.3%, 4=13.0%, 8=69.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:27.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 complete : 0=0.0%, 4=91.0%, 8=4.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 issued rwts: total=2461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.206 filename2: (groupid=0, jobs=1): err= 0: pid=102715: Sun Dec 1 15:06:58 2024 00:26:27.206 read: IOPS=257, BW=1029KiB/s (1054kB/s)(10.1MiB/10021msec) 00:26:27.206 slat (usec): min=4, max=8032, avg=18.72, stdev=223.30 00:26:27.206 clat (msec): min=26, max=109, avg=62.02, stdev=15.20 00:26:27.206 lat (msec): min=26, max=109, avg=62.04, stdev=15.20 00:26:27.206 clat percentiles (msec): 00:26:27.206 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 50], 00:26:27.206 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 63], 00:26:27.206 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 89], 00:26:27.206 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 110], 99.95th=[ 110], 00:26:27.206 | 99.99th=[ 110] 00:26:27.206 bw ( KiB/s): min= 832, max= 1136, per=3.90%, avg=999.89, stdev=99.09, samples=19 00:26:27.206 iops : min= 208, max= 284, avg=249.95, stdev=24.79, samples=19 00:26:27.206 lat (msec) : 50=23.11%, 100=75.34%, 250=1.55% 00:26:27.206 cpu : usr=35.25%, sys=0.32%, ctx=1045, majf=0, minf=9 00:26:27.206 IO depths : 1=1.2%, 2=3.2%, 4=12.3%, 8=70.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:27.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 complete : 0=0.0%, 4=90.8%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 issued rwts: total=2579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:26:27.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.206 filename2: (groupid=0, jobs=1): err= 0: pid=102716: Sun Dec 1 15:06:58 2024 00:26:27.206 read: IOPS=252, BW=1011KiB/s (1036kB/s)(9.89MiB/10017msec) 00:26:27.206 slat (usec): min=4, max=8029, avg=22.29, stdev=275.72 00:26:27.206 clat (msec): min=25, max=169, avg=63.13, stdev=19.69 00:26:27.206 lat (msec): min=25, max=169, avg=63.15, stdev=19.70 00:26:27.206 clat percentiles (msec): 00:26:27.206 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 48], 00:26:27.206 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:26:27.206 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 91], 95.00th=[ 96], 00:26:27.206 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 169], 99.95th=[ 169], 00:26:27.206 | 99.99th=[ 169] 00:26:27.206 bw ( KiB/s): min= 640, max= 1408, per=3.90%, avg=998.63, stdev=181.95, samples=19 00:26:27.206 iops : min= 160, max= 352, avg=249.63, stdev=45.52, samples=19 00:26:27.206 lat (msec) : 50=28.50%, 100=67.39%, 250=4.11% 00:26:27.206 cpu : usr=34.03%, sys=0.46%, ctx=885, majf=0, minf=9 00:26:27.206 IO depths : 1=1.7%, 2=4.0%, 4=12.3%, 8=70.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:27.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 issued rwts: total=2533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.206 filename2: (groupid=0, jobs=1): err= 0: pid=102717: Sun Dec 1 15:06:58 2024 00:26:27.206 read: IOPS=256, BW=1025KiB/s (1049kB/s)(10.0MiB/10001msec) 00:26:27.206 slat (usec): min=4, max=8021, avg=17.42, stdev=177.30 00:26:27.206 clat (msec): min=3, max=130, avg=62.33, stdev=17.83 00:26:27.206 lat (msec): min=3, max=130, avg=62.35, stdev=17.83 00:26:27.206 clat percentiles (msec): 00:26:27.206 | 1.00th=[ 5], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 52], 00:26:27.206 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 63], 00:26:27.206 | 70.00th=[ 69], 80.00th=[ 77], 90.00th=[ 86], 95.00th=[ 93], 00:26:27.206 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 131], 99.95th=[ 131], 00:26:27.206 | 99.99th=[ 131] 00:26:27.206 bw ( KiB/s): min= 696, max= 1280, per=3.90%, avg=997.42, stdev=147.25, samples=19 00:26:27.206 iops : min= 174, max= 320, avg=249.32, stdev=36.78, samples=19 00:26:27.206 lat (msec) : 4=0.62%, 10=1.25%, 50=16.24%, 100=79.66%, 250=2.22% 00:26:27.206 cpu : usr=46.43%, sys=0.52%, ctx=1430, majf=0, minf=9 00:26:27.206 IO depths : 1=2.8%, 2=6.4%, 4=16.3%, 8=64.2%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:27.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 complete : 0=0.0%, 4=91.9%, 8=2.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.206 issued rwts: total=2562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.206 filename2: (groupid=0, jobs=1): err= 0: pid=102719: Sun Dec 1 15:06:58 2024 00:26:27.206 read: IOPS=276, BW=1105KiB/s (1131kB/s)(10.8MiB/10032msec) 00:26:27.206 slat (usec): min=4, max=4029, avg=13.94, stdev=76.71 00:26:27.206 clat (msec): min=14, max=144, avg=57.76, stdev=16.70 00:26:27.206 lat (msec): min=14, max=144, avg=57.77, stdev=16.70 00:26:27.206 clat percentiles (msec): 00:26:27.207 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 44], 00:26:27.207 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 60], 00:26:27.207 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 81], 
95.00th=[ 87], 00:26:27.207 | 99.00th=[ 103], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:26:27.207 | 99.99th=[ 144] 00:26:27.207 bw ( KiB/s): min= 824, max= 1584, per=4.30%, avg=1101.50, stdev=182.68, samples=20 00:26:27.207 iops : min= 206, max= 396, avg=275.35, stdev=45.66, samples=20 00:26:27.207 lat (msec) : 20=0.58%, 50=30.24%, 100=67.88%, 250=1.30% 00:26:27.207 cpu : usr=40.74%, sys=0.56%, ctx=1184, majf=0, minf=9 00:26:27.207 IO depths : 1=1.6%, 2=3.4%, 4=11.0%, 8=71.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:27.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.207 complete : 0=0.0%, 4=90.3%, 8=5.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.207 issued rwts: total=2771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.207 filename2: (groupid=0, jobs=1): err= 0: pid=102720: Sun Dec 1 15:06:58 2024 00:26:27.207 read: IOPS=250, BW=1003KiB/s (1027kB/s)(9.80MiB/10007msec) 00:26:27.207 slat (usec): min=5, max=8046, avg=28.43, stdev=357.55 00:26:27.207 clat (msec): min=28, max=119, avg=63.62, stdev=16.41 00:26:27.207 lat (msec): min=28, max=119, avg=63.65, stdev=16.41 00:26:27.207 clat percentiles (msec): 00:26:27.207 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 51], 00:26:27.207 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 64], 00:26:27.207 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 87], 95.00th=[ 96], 00:26:27.207 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 120], 99.95th=[ 120], 00:26:27.207 | 99.99th=[ 120] 00:26:27.207 bw ( KiB/s): min= 768, max= 1168, per=3.88%, avg=993.16, stdev=105.18, samples=19 00:26:27.207 iops : min= 192, max= 292, avg=248.26, stdev=26.30, samples=19 00:26:27.207 lat (msec) : 50=19.73%, 100=77.16%, 250=3.11% 00:26:27.207 cpu : usr=37.24%, sys=0.58%, ctx=1005, majf=0, minf=9 00:26:27.207 IO depths : 1=1.4%, 2=3.3%, 4=11.2%, 8=71.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:27.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.207 complete : 0=0.0%, 4=90.4%, 8=5.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.207 issued rwts: total=2509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.207 filename2: (groupid=0, jobs=1): err= 0: pid=102721: Sun Dec 1 15:06:58 2024 00:26:27.207 read: IOPS=319, BW=1277KiB/s (1307kB/s)(12.5MiB/10033msec) 00:26:27.207 slat (usec): min=6, max=7017, avg=17.84, stdev=194.20 00:26:27.207 clat (msec): min=2, max=119, avg=49.98, stdev=18.92 00:26:27.207 lat (msec): min=2, max=119, avg=49.99, stdev=18.92 00:26:27.207 clat percentiles (msec): 00:26:27.207 | 1.00th=[ 4], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 37], 00:26:27.207 | 30.00th=[ 41], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 55], 00:26:27.207 | 70.00th=[ 58], 80.00th=[ 65], 90.00th=[ 72], 95.00th=[ 82], 00:26:27.207 | 99.00th=[ 102], 99.50th=[ 113], 99.90th=[ 120], 99.95th=[ 121], 00:26:27.207 | 99.99th=[ 121] 00:26:27.207 bw ( KiB/s): min= 768, max= 2671, per=4.97%, avg=1273.75, stdev=383.57, samples=20 00:26:27.207 iops : min= 192, max= 667, avg=318.35, stdev=95.78, samples=20 00:26:27.207 lat (msec) : 4=1.50%, 10=2.00%, 20=1.50%, 50=49.31%, 100=44.22% 00:26:27.207 lat (msec) : 250=1.47% 00:26:27.207 cpu : usr=45.38%, sys=0.95%, ctx=1709, majf=0, minf=0 00:26:27.207 IO depths : 1=1.3%, 2=2.8%, 4=9.6%, 8=74.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:27.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.207 complete : 0=0.0%, 
4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:27.207 issued rwts: total=3202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:27.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:27.207 00:26:27.207 Run status group 0 (all jobs): 00:26:27.207 READ: bw=25.0MiB/s (26.2MB/s), 960KiB/s-1277KiB/s (983kB/s-1307kB/s), io=251MiB (263MB), run=10001-10042msec 00:26:27.207 15:06:58 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:27.207 15:06:58 -- target/dif.sh@43 -- # local sub 00:26:27.207 15:06:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.207 15:06:58 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:27.207 15:06:58 -- target/dif.sh@36 -- # local sub_id=0 00:26:27.207 15:06:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.207 15:06:58 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:27.207 15:06:58 -- target/dif.sh@36 -- # local sub_id=1 00:26:27.207 15:06:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:27.207 15:06:58 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:27.207 15:06:58 -- target/dif.sh@36 -- # local sub_id=2 00:26:27.207 15:06:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:27.207 15:06:58 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:27.207 15:06:58 -- target/dif.sh@115 -- # numjobs=2 00:26:27.207 15:06:58 -- target/dif.sh@115 -- # iodepth=8 00:26:27.207 15:06:58 -- target/dif.sh@115 -- # runtime=5 00:26:27.207 15:06:58 -- target/dif.sh@115 -- # files=1 00:26:27.207 15:06:58 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:27.207 15:06:58 -- target/dif.sh@28 -- # local sub 00:26:27.207 15:06:58 -- target/dif.sh@30 -- # for sub in "$@" 00:26:27.207 15:06:58 -- target/dif.sh@31 -- # create_subsystem 0 00:26:27.207 
15:06:58 -- target/dif.sh@18 -- # local sub_id=0 00:26:27.207 15:06:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 bdev_null0 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 [2024-12-01 15:06:58.567085] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@30 -- # for sub in "$@" 00:26:27.207 15:06:58 -- target/dif.sh@31 -- # create_subsystem 1 00:26:27.207 15:06:58 -- target/dif.sh@18 -- # local sub_id=1 00:26:27.207 15:06:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 bdev_null1 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.207 15:06:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.207 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:26:27.207 15:06:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.207 15:06:58 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:27.207 15:06:58 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:27.207 15:06:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:27.207 15:06:58 -- target/dif.sh@82 -- # gen_fio_conf 00:26:27.207 15:06:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:26:27.207 15:06:58 -- nvmf/common.sh@520 -- # config=() 00:26:27.207 15:06:58 -- target/dif.sh@54 -- # local file 00:26:27.207 15:06:58 -- nvmf/common.sh@520 -- # local subsystem config 00:26:27.207 15:06:58 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.207 15:06:58 -- target/dif.sh@56 -- # cat 00:26:27.207 15:06:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:27.208 15:06:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.208 15:06:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:27.208 15:06:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:27.208 15:06:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.208 { 00:26:27.208 "params": { 00:26:27.208 "name": "Nvme$subsystem", 00:26:27.208 "trtype": "$TEST_TRANSPORT", 00:26:27.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.208 "adrfam": "ipv4", 00:26:27.208 "trsvcid": "$NVMF_PORT", 00:26:27.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.208 "hdgst": ${hdgst:-false}, 00:26:27.208 "ddgst": ${ddgst:-false} 00:26:27.208 }, 00:26:27.208 "method": "bdev_nvme_attach_controller" 00:26:27.208 } 00:26:27.208 EOF 00:26:27.208 )") 00:26:27.208 15:06:58 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:27.208 15:06:58 -- common/autotest_common.sh@1330 -- # shift 00:26:27.208 15:06:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:27.208 15:06:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:27.208 15:06:58 -- nvmf/common.sh@542 -- # cat 00:26:27.208 15:06:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:27.208 15:06:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:27.208 15:06:58 -- target/dif.sh@72 -- # (( file <= files )) 00:26:27.208 15:06:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:27.208 15:06:58 -- target/dif.sh@73 -- # cat 00:26:27.208 15:06:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:27.208 15:06:58 -- target/dif.sh@72 -- # (( file++ )) 00:26:27.208 15:06:58 -- target/dif.sh@72 -- # (( file <= files )) 00:26:27.208 15:06:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.208 15:06:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.208 { 00:26:27.208 "params": { 00:26:27.208 "name": "Nvme$subsystem", 00:26:27.208 "trtype": "$TEST_TRANSPORT", 00:26:27.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.208 "adrfam": "ipv4", 00:26:27.208 "trsvcid": "$NVMF_PORT", 00:26:27.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.208 "hdgst": ${hdgst:-false}, 00:26:27.208 "ddgst": ${ddgst:-false} 00:26:27.208 }, 00:26:27.208 "method": "bdev_nvme_attach_controller" 00:26:27.208 } 00:26:27.208 EOF 00:26:27.208 )") 00:26:27.208 15:06:58 -- nvmf/common.sh@542 -- # cat 00:26:27.208 15:06:58 -- nvmf/common.sh@544 -- # jq . 
00:26:27.208 15:06:58 -- nvmf/common.sh@545 -- # IFS=, 00:26:27.208 15:06:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:27.208 "params": { 00:26:27.208 "name": "Nvme0", 00:26:27.208 "trtype": "tcp", 00:26:27.208 "traddr": "10.0.0.2", 00:26:27.208 "adrfam": "ipv4", 00:26:27.208 "trsvcid": "4420", 00:26:27.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:27.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:27.208 "hdgst": false, 00:26:27.208 "ddgst": false 00:26:27.208 }, 00:26:27.208 "method": "bdev_nvme_attach_controller" 00:26:27.208 },{ 00:26:27.208 "params": { 00:26:27.208 "name": "Nvme1", 00:26:27.208 "trtype": "tcp", 00:26:27.208 "traddr": "10.0.0.2", 00:26:27.208 "adrfam": "ipv4", 00:26:27.208 "trsvcid": "4420", 00:26:27.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.208 "hdgst": false, 00:26:27.208 "ddgst": false 00:26:27.208 }, 00:26:27.208 "method": "bdev_nvme_attach_controller" 00:26:27.208 }' 00:26:27.208 15:06:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:27.208 15:06:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:27.208 15:06:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:27.208 15:06:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:27.208 15:06:58 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:27.208 15:06:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:27.208 15:06:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:27.208 15:06:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:27.208 15:06:58 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:27.208 15:06:58 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.208 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:27.208 ... 00:26:27.208 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:27.208 ... 00:26:27.208 fio-3.35 00:26:27.208 Starting 4 threads 00:26:27.208 [2024-12-01 15:06:59.318712] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:27.208 [2024-12-01 15:06:59.318788] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:31.395 00:26:31.395 filename0: (groupid=0, jobs=1): err= 0: pid=102853: Sun Dec 1 15:07:04 2024 00:26:31.395 read: IOPS=2311, BW=18.1MiB/s (18.9MB/s)(90.3MiB/5002msec) 00:26:31.395 slat (usec): min=5, max=102, avg=21.31, stdev=11.79 00:26:31.395 clat (usec): min=863, max=9876, avg=3358.02, stdev=382.17 00:26:31.395 lat (usec): min=869, max=9905, avg=3379.33, stdev=382.45 00:26:31.395 clat percentiles (usec): 00:26:31.395 | 1.00th=[ 2966], 5.00th=[ 3097], 10.00th=[ 3130], 20.00th=[ 3195], 00:26:31.395 | 30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3294], 60.00th=[ 3326], 00:26:31.395 | 70.00th=[ 3359], 80.00th=[ 3425], 90.00th=[ 3556], 95.00th=[ 3752], 00:26:31.395 | 99.00th=[ 5145], 99.50th=[ 5145], 99.90th=[ 6915], 99.95th=[ 8455], 00:26:31.395 | 99.99th=[ 8586] 00:26:31.395 bw ( KiB/s): min=15360, max=19200, per=24.98%, avg=18480.00, stdev=1204.90, samples=9 00:26:31.395 iops : min= 1920, max= 2400, avg=2310.00, stdev=150.61, samples=9 00:26:31.395 lat (usec) : 1000=0.03% 00:26:31.395 lat (msec) : 2=0.06%, 4=96.46%, 10=3.45% 00:26:31.395 cpu : usr=95.10%, sys=3.56%, ctx=5, majf=0, minf=0 00:26:31.395 IO depths : 1=9.9%, 2=24.4%, 4=50.6%, 8=15.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.395 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.395 issued rwts: total=11563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.395 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:31.395 filename0: (groupid=0, jobs=1): err= 0: pid=102854: Sun Dec 1 15:07:04 2024 00:26:31.395 read: IOPS=2312, BW=18.1MiB/s (18.9MB/s)(90.3MiB/5001msec) 00:26:31.395 slat (usec): min=5, max=102, avg=21.46, stdev=11.61 00:26:31.395 clat (usec): min=912, max=9902, avg=3357.20, stdev=398.14 00:26:31.395 lat (usec): min=919, max=9919, avg=3378.67, stdev=398.65 00:26:31.395 clat percentiles (usec): 00:26:31.395 | 1.00th=[ 2900], 5.00th=[ 3097], 10.00th=[ 3130], 20.00th=[ 3195], 00:26:31.395 | 30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3294], 60.00th=[ 3326], 00:26:31.395 | 70.00th=[ 3359], 80.00th=[ 3425], 90.00th=[ 3556], 95.00th=[ 3752], 00:26:31.395 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 7504], 99.95th=[ 8455], 00:26:31.395 | 99.99th=[ 8717] 00:26:31.395 bw ( KiB/s): min=15408, max=19200, per=24.96%, avg=18465.78, stdev=1182.77, samples=9 00:26:31.395 iops : min= 1926, max= 2400, avg=2308.22, stdev=147.85, samples=9 00:26:31.395 lat (usec) : 1000=0.03% 00:26:31.395 lat (msec) : 2=0.02%, 4=96.36%, 10=3.60% 00:26:31.395 cpu : usr=95.20%, sys=3.52%, ctx=6, majf=0, minf=0 00:26:31.395 IO depths : 1=6.4%, 2=24.5%, 4=50.4%, 8=18.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.396 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.396 issued rwts: total=11563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.396 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:31.396 filename1: (groupid=0, jobs=1): err= 0: pid=102855: Sun Dec 1 15:07:04 2024 00:26:31.396 read: IOPS=2321, BW=18.1MiB/s (19.0MB/s)(90.7MiB/5003msec) 00:26:31.396 slat (usec): min=5, max=104, avg= 9.99, stdev= 6.78 00:26:31.396 clat (usec): min=829, max=8582, avg=3399.10, stdev=383.68 00:26:31.396 lat (usec): min=835, max=8590, avg=3409.08, stdev=384.16 00:26:31.396 clat percentiles 
(usec): 00:26:31.396 | 1.00th=[ 2868], 5.00th=[ 3195], 10.00th=[ 3228], 20.00th=[ 3261], 00:26:31.396 | 30.00th=[ 3294], 40.00th=[ 3294], 50.00th=[ 3326], 60.00th=[ 3359], 00:26:31.396 | 70.00th=[ 3392], 80.00th=[ 3458], 90.00th=[ 3589], 95.00th=[ 3785], 00:26:31.396 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 6915], 99.95th=[ 7373], 00:26:31.396 | 99.99th=[ 8586] 00:26:31.396 bw ( KiB/s): min=15519, max=19200, per=25.09%, avg=18559.89, stdev=1170.38, samples=9 00:26:31.396 iops : min= 1939, max= 2400, avg=2319.89, stdev=146.58, samples=9 00:26:31.396 lat (usec) : 1000=0.20% 00:26:31.396 lat (msec) : 2=0.12%, 4=96.32%, 10=3.36% 00:26:31.396 cpu : usr=95.92%, sys=2.90%, ctx=6, majf=0, minf=0 00:26:31.396 IO depths : 1=8.4%, 2=19.5%, 4=55.3%, 8=16.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.396 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.396 issued rwts: total=11614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.396 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:31.396 filename1: (groupid=0, jobs=1): err= 0: pid=102856: Sun Dec 1 15:07:04 2024 00:26:31.396 read: IOPS=2304, BW=18.0MiB/s (18.9MB/s)(90.1MiB/5002msec) 00:26:31.396 slat (usec): min=3, max=108, avg=17.55, stdev= 9.85 00:26:31.396 clat (usec): min=847, max=9996, avg=3391.25, stdev=412.75 00:26:31.396 lat (usec): min=853, max=10020, avg=3408.80, stdev=412.60 00:26:31.396 clat percentiles (usec): 00:26:31.396 | 1.00th=[ 3064], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3228], 00:26:31.396 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3294], 60.00th=[ 3326], 00:26:31.396 | 70.00th=[ 3392], 80.00th=[ 3458], 90.00th=[ 3589], 95.00th=[ 3818], 00:26:31.396 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 8586], 99.95th=[ 8586], 00:26:31.396 | 99.99th=[ 9241] 00:26:31.396 bw ( KiB/s): min=15488, max=19200, per=24.87%, avg=18403.56, stdev=1139.12, samples=9 00:26:31.396 iops : min= 1936, max= 2400, avg=2300.44, stdev=142.39, samples=9 00:26:31.396 lat (usec) : 1000=0.05% 00:26:31.396 lat (msec) : 2=0.09%, 4=95.86%, 10=4.00% 00:26:31.396 cpu : usr=95.68%, sys=2.96%, ctx=11, majf=0, minf=9 00:26:31.396 IO depths : 1=10.5%, 2=23.6%, 4=51.3%, 8=14.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.396 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.396 issued rwts: total=11528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.396 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:31.396 00:26:31.396 Run status group 0 (all jobs): 00:26:31.396 READ: bw=72.2MiB/s (75.8MB/s), 18.0MiB/s-18.1MiB/s (18.9MB/s-19.0MB/s), io=361MiB (379MB), run=5001-5003msec 00:26:31.655 15:07:04 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:31.655 15:07:04 -- target/dif.sh@43 -- # local sub 00:26:31.655 15:07:04 -- target/dif.sh@45 -- # for sub in "$@" 00:26:31.655 15:07:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:31.655 15:07:04 -- target/dif.sh@36 -- # local sub_id=0 00:26:31.655 15:07:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:31.655 15:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.655 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 15:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.655 15:07:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:31.655 15:07:04 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.655 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 15:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.655 15:07:04 -- target/dif.sh@45 -- # for sub in "$@" 00:26:31.655 15:07:04 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:31.655 15:07:04 -- target/dif.sh@36 -- # local sub_id=1 00:26:31.655 15:07:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:31.655 15:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.655 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 15:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.655 15:07:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:31.655 15:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.655 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 15:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.655 ************************************ 00:26:31.655 END TEST fio_dif_rand_params 00:26:31.655 ************************************ 00:26:31.655 00:26:31.655 real 0m23.752s 00:26:31.655 user 2m8.306s 00:26:31.655 sys 0m3.519s 00:26:31.655 15:07:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:31.655 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 15:07:04 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:31.655 15:07:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:31.655 15:07:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:31.655 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 ************************************ 00:26:31.655 START TEST fio_dif_digest 00:26:31.655 ************************************ 00:26:31.655 15:07:04 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:31.655 15:07:04 -- target/dif.sh@123 -- # local NULL_DIF 00:26:31.655 15:07:04 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:31.655 15:07:04 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:31.655 15:07:04 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:31.655 15:07:04 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:31.655 15:07:04 -- target/dif.sh@127 -- # numjobs=3 00:26:31.655 15:07:04 -- target/dif.sh@127 -- # iodepth=3 00:26:31.655 15:07:04 -- target/dif.sh@127 -- # runtime=10 00:26:31.655 15:07:04 -- target/dif.sh@128 -- # hdgst=true 00:26:31.655 15:07:04 -- target/dif.sh@128 -- # ddgst=true 00:26:31.655 15:07:04 -- target/dif.sh@130 -- # create_subsystems 0 00:26:31.655 15:07:04 -- target/dif.sh@28 -- # local sub 00:26:31.655 15:07:04 -- target/dif.sh@30 -- # for sub in "$@" 00:26:31.655 15:07:04 -- target/dif.sh@31 -- # create_subsystem 0 00:26:31.655 15:07:04 -- target/dif.sh@18 -- # local sub_id=0 00:26:31.655 15:07:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:31.655 15:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.655 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 bdev_null0 00:26:31.655 15:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.655 15:07:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:31.655 15:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.655 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 15:07:04 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:31.655 15:07:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:31.655 15:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.655 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.655 15:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.655 15:07:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:31.655 15:07:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.655 15:07:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.914 [2024-12-01 15:07:04.770418] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.914 15:07:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.914 15:07:04 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:31.914 15:07:04 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:31.914 15:07:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.914 15:07:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:31.914 15:07:04 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.914 15:07:04 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:31.914 15:07:04 -- target/dif.sh@82 -- # gen_fio_conf 00:26:31.914 15:07:04 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:31.914 15:07:04 -- nvmf/common.sh@520 -- # config=() 00:26:31.914 15:07:04 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:31.914 15:07:04 -- target/dif.sh@54 -- # local file 00:26:31.914 15:07:04 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:31.914 15:07:04 -- nvmf/common.sh@520 -- # local subsystem config 00:26:31.914 15:07:04 -- common/autotest_common.sh@1330 -- # shift 00:26:31.914 15:07:04 -- target/dif.sh@56 -- # cat 00:26:31.914 15:07:04 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:31.914 15:07:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.914 15:07:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:31.914 15:07:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:31.914 { 00:26:31.914 "params": { 00:26:31.914 "name": "Nvme$subsystem", 00:26:31.914 "trtype": "$TEST_TRANSPORT", 00:26:31.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.914 "adrfam": "ipv4", 00:26:31.914 "trsvcid": "$NVMF_PORT", 00:26:31.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.914 "hdgst": ${hdgst:-false}, 00:26:31.914 "ddgst": ${ddgst:-false} 00:26:31.914 }, 00:26:31.914 "method": "bdev_nvme_attach_controller" 00:26:31.914 } 00:26:31.914 EOF 00:26:31.914 )") 00:26:31.914 15:07:04 -- nvmf/common.sh@542 -- # cat 00:26:31.914 15:07:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:31.914 15:07:04 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:31.914 15:07:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:31.914 15:07:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:31.914 15:07:04 -- target/dif.sh@72 -- # (( file <= files )) 00:26:31.914 15:07:04 -- nvmf/common.sh@544 -- # jq . 
00:26:31.914 15:07:04 -- nvmf/common.sh@545 -- # IFS=, 00:26:31.914 15:07:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:31.914 "params": { 00:26:31.914 "name": "Nvme0", 00:26:31.914 "trtype": "tcp", 00:26:31.914 "traddr": "10.0.0.2", 00:26:31.914 "adrfam": "ipv4", 00:26:31.914 "trsvcid": "4420", 00:26:31.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:31.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:31.914 "hdgst": true, 00:26:31.914 "ddgst": true 00:26:31.914 }, 00:26:31.914 "method": "bdev_nvme_attach_controller" 00:26:31.914 }' 00:26:31.914 15:07:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:31.914 15:07:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:31.914 15:07:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:31.915 15:07:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:31.915 15:07:04 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:31.915 15:07:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:31.915 15:07:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:31.915 15:07:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:31.915 15:07:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:31.915 15:07:04 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.915 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:31.915 ... 00:26:31.915 fio-3.35 00:26:31.915 Starting 3 threads 00:26:32.482 [2024-12-01 15:07:05.331087] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:32.482 [2024-12-01 15:07:05.331163] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:42.468 00:26:42.468 filename0: (groupid=0, jobs=1): err= 0: pid=102962: Sun Dec 1 15:07:15 2024 00:26:42.468 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(351MiB/10004msec) 00:26:42.468 slat (nsec): min=6099, max=65744, avg=16352.88, stdev=6847.23 00:26:42.468 clat (usec): min=7519, max=53191, avg=10667.35, stdev=5241.52 00:26:42.468 lat (usec): min=7530, max=53210, avg=10683.70, stdev=5241.64 00:26:42.468 clat percentiles (usec): 00:26:42.468 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9241], 00:26:42.468 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:26:42.468 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11600], 00:26:42.468 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:26:42.468 | 99.99th=[53216] 00:26:42.468 bw ( KiB/s): min=28928, max=39680, per=36.96%, avg=35929.60, stdev=3057.68, samples=20 00:26:42.468 iops : min= 226, max= 310, avg=280.70, stdev=23.89, samples=20 00:26:42.468 lat (msec) : 10=52.07%, 20=46.26%, 50=0.39%, 100=1.28% 00:26:42.468 cpu : usr=94.67%, sys=3.80%, ctx=10, majf=0, minf=9 00:26:42.468 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:42.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.468 issued rwts: total=2808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.468 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:42.468 filename0: (groupid=0, jobs=1): err= 0: pid=102963: Sun Dec 1 15:07:15 2024 00:26:42.468 read: IOPS=223, BW=27.9MiB/s (29.2MB/s)(279MiB/10003msec) 00:26:42.469 slat (nsec): min=6210, max=67032, avg=14848.28, stdev=6310.02 00:26:42.469 clat (usec): min=2912, max=28909, avg=13425.29, stdev=1810.42 00:26:42.469 lat (usec): min=2922, max=28931, avg=13440.14, stdev=1811.22 00:26:42.469 clat percentiles (usec): 00:26:42.469 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[12256], 20.00th=[12911], 00:26:42.469 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:26:42.469 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14746], 95.00th=[15139], 00:26:42.469 | 99.00th=[17171], 99.50th=[21627], 99.90th=[25297], 99.95th=[28705], 00:26:42.469 | 99.99th=[28967] 00:26:42.469 bw ( KiB/s): min=23808, max=31488, per=29.36%, avg=28544.00, stdev=1693.03, samples=20 00:26:42.469 iops : min= 186, max= 246, avg=223.00, stdev=13.23, samples=20 00:26:42.469 lat (msec) : 4=0.04%, 10=7.93%, 20=91.26%, 50=0.76% 00:26:42.469 cpu : usr=94.27%, sys=4.14%, ctx=111, majf=0, minf=9 00:26:42.469 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:42.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.469 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.469 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:42.469 filename0: (groupid=0, jobs=1): err= 0: pid=102964: Sun Dec 1 15:07:15 2024 00:26:42.469 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(324MiB/10044msec) 00:26:42.469 slat (nsec): min=6071, max=75597, avg=13985.31, stdev=6691.71 00:26:42.469 clat (usec): min=5890, max=52777, avg=11607.48, stdev=2099.17 00:26:42.469 lat (usec): min=5901, max=52784, avg=11621.47, stdev=2099.71 00:26:42.469 clat percentiles (usec): 
00:26:42.469 | 1.00th=[ 6587], 5.00th=[ 7439], 10.00th=[ 9503], 20.00th=[10814], 00:26:42.469 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:26:42.469 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:26:42.469 | 99.00th=[15533], 99.50th=[17957], 99.90th=[25035], 99.95th=[49021], 00:26:42.469 | 99.99th=[52691] 00:26:42.469 bw ( KiB/s): min=27904, max=36864, per=34.05%, avg=33103.80, stdev=2137.78, samples=20 00:26:42.469 iops : min= 218, max= 288, avg=258.60, stdev=16.73, samples=20 00:26:42.469 lat (msec) : 10=11.13%, 20=88.68%, 50=0.15%, 100=0.04% 00:26:42.469 cpu : usr=93.85%, sys=4.56%, ctx=90, majf=0, minf=9 00:26:42.469 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:42.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.469 issued rwts: total=2588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.469 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:42.469 00:26:42.469 Run status group 0 (all jobs): 00:26:42.469 READ: bw=94.9MiB/s (99.5MB/s), 27.9MiB/s-35.1MiB/s (29.2MB/s-36.8MB/s), io=954MiB (1000MB), run=10003-10044msec 00:26:42.747 15:07:15 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:42.747 15:07:15 -- target/dif.sh@43 -- # local sub 00:26:42.747 15:07:15 -- target/dif.sh@45 -- # for sub in "$@" 00:26:42.747 15:07:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:42.747 15:07:15 -- target/dif.sh@36 -- # local sub_id=0 00:26:42.747 15:07:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:42.747 15:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.747 15:07:15 -- common/autotest_common.sh@10 -- # set +x 00:26:42.747 15:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.747 15:07:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:42.747 15:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.747 15:07:15 -- common/autotest_common.sh@10 -- # set +x 00:26:42.747 ************************************ 00:26:42.747 END TEST fio_dif_digest 00:26:42.747 ************************************ 00:26:42.747 15:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.747 00:26:42.747 real 0m11.064s 00:26:42.747 user 0m29.051s 00:26:42.747 sys 0m1.502s 00:26:42.747 15:07:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:42.747 15:07:15 -- common/autotest_common.sh@10 -- # set +x 00:26:42.747 15:07:15 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:42.747 15:07:15 -- target/dif.sh@147 -- # nvmftestfini 00:26:42.747 15:07:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:42.747 15:07:15 -- nvmf/common.sh@116 -- # sync 00:26:43.011 15:07:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:43.011 15:07:15 -- nvmf/common.sh@119 -- # set +e 00:26:43.011 15:07:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:43.011 15:07:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:43.011 rmmod nvme_tcp 00:26:43.011 rmmod nvme_fabrics 00:26:43.011 rmmod nvme_keyring 00:26:43.011 15:07:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:43.011 15:07:15 -- nvmf/common.sh@123 -- # set -e 00:26:43.011 15:07:15 -- nvmf/common.sh@124 -- # return 0 00:26:43.011 15:07:15 -- nvmf/common.sh@477 -- # '[' -n 102191 ']' 00:26:43.011 15:07:15 -- nvmf/common.sh@478 -- # killprocess 102191 00:26:43.011 15:07:15 -- common/autotest_common.sh@936 -- # 
'[' -z 102191 ']' 00:26:43.011 15:07:15 -- common/autotest_common.sh@940 -- # kill -0 102191 00:26:43.011 15:07:15 -- common/autotest_common.sh@941 -- # uname 00:26:43.011 15:07:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:43.011 15:07:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102191 00:26:43.011 killing process with pid 102191 00:26:43.011 15:07:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:43.011 15:07:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:43.011 15:07:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102191' 00:26:43.011 15:07:15 -- common/autotest_common.sh@955 -- # kill 102191 00:26:43.011 15:07:15 -- common/autotest_common.sh@960 -- # wait 102191 00:26:43.270 15:07:16 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:43.270 15:07:16 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:43.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:43.529 Waiting for block devices as requested 00:26:43.529 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:43.787 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:43.787 15:07:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:43.787 15:07:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:43.787 15:07:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:43.787 15:07:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:43.787 15:07:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.787 15:07:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:43.787 15:07:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.787 15:07:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:43.787 00:26:43.787 real 1m0.319s 00:26:43.787 user 3m52.612s 00:26:43.787 sys 0m14.097s 00:26:43.787 15:07:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:43.787 ************************************ 00:26:43.787 END TEST nvmf_dif 00:26:43.787 ************************************ 00:26:43.787 15:07:16 -- common/autotest_common.sh@10 -- # set +x 00:26:43.788 15:07:16 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:43.788 15:07:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:43.788 15:07:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:43.788 15:07:16 -- common/autotest_common.sh@10 -- # set +x 00:26:43.788 ************************************ 00:26:43.788 START TEST nvmf_abort_qd_sizes 00:26:43.788 ************************************ 00:26:43.788 15:07:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:44.046 * Looking for test storage... 
00:26:44.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:44.046 15:07:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:44.046 15:07:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:44.046 15:07:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:44.046 15:07:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:44.046 15:07:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:44.046 15:07:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:44.046 15:07:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:44.046 15:07:17 -- scripts/common.sh@335 -- # IFS=.-: 00:26:44.046 15:07:17 -- scripts/common.sh@335 -- # read -ra ver1 00:26:44.046 15:07:17 -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.046 15:07:17 -- scripts/common.sh@336 -- # read -ra ver2 00:26:44.046 15:07:17 -- scripts/common.sh@337 -- # local 'op=<' 00:26:44.046 15:07:17 -- scripts/common.sh@339 -- # ver1_l=2 00:26:44.046 15:07:17 -- scripts/common.sh@340 -- # ver2_l=1 00:26:44.046 15:07:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:44.046 15:07:17 -- scripts/common.sh@343 -- # case "$op" in 00:26:44.046 15:07:17 -- scripts/common.sh@344 -- # : 1 00:26:44.046 15:07:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:44.046 15:07:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.046 15:07:17 -- scripts/common.sh@364 -- # decimal 1 00:26:44.046 15:07:17 -- scripts/common.sh@352 -- # local d=1 00:26:44.046 15:07:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.046 15:07:17 -- scripts/common.sh@354 -- # echo 1 00:26:44.046 15:07:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:44.046 15:07:17 -- scripts/common.sh@365 -- # decimal 2 00:26:44.046 15:07:17 -- scripts/common.sh@352 -- # local d=2 00:26:44.046 15:07:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.046 15:07:17 -- scripts/common.sh@354 -- # echo 2 00:26:44.046 15:07:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:44.046 15:07:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:44.046 15:07:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:44.046 15:07:17 -- scripts/common.sh@367 -- # return 0 00:26:44.046 15:07:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.046 15:07:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.047 --rc genhtml_branch_coverage=1 00:26:44.047 --rc genhtml_function_coverage=1 00:26:44.047 --rc genhtml_legend=1 00:26:44.047 --rc geninfo_all_blocks=1 00:26:44.047 --rc geninfo_unexecuted_blocks=1 00:26:44.047 00:26:44.047 ' 00:26:44.047 15:07:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.047 --rc genhtml_branch_coverage=1 00:26:44.047 --rc genhtml_function_coverage=1 00:26:44.047 --rc genhtml_legend=1 00:26:44.047 --rc geninfo_all_blocks=1 00:26:44.047 --rc geninfo_unexecuted_blocks=1 00:26:44.047 00:26:44.047 ' 00:26:44.047 15:07:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.047 --rc genhtml_branch_coverage=1 00:26:44.047 --rc genhtml_function_coverage=1 00:26:44.047 --rc genhtml_legend=1 00:26:44.047 --rc geninfo_all_blocks=1 00:26:44.047 --rc geninfo_unexecuted_blocks=1 00:26:44.047 00:26:44.047 ' 00:26:44.047 
15:07:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.047 --rc genhtml_branch_coverage=1 00:26:44.047 --rc genhtml_function_coverage=1 00:26:44.047 --rc genhtml_legend=1 00:26:44.047 --rc geninfo_all_blocks=1 00:26:44.047 --rc geninfo_unexecuted_blocks=1 00:26:44.047 00:26:44.047 ' 00:26:44.047 15:07:17 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:44.047 15:07:17 -- nvmf/common.sh@7 -- # uname -s 00:26:44.047 15:07:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.047 15:07:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.047 15:07:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.047 15:07:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.047 15:07:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.047 15:07:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.047 15:07:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.047 15:07:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.047 15:07:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.047 15:07:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.047 15:07:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b 00:26:44.047 15:07:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=2d843004-a791-47f3-8dd7-3d04462c368b 00:26:44.047 15:07:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.047 15:07:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.047 15:07:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:44.047 15:07:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:44.047 15:07:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.047 15:07:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.047 15:07:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.047 15:07:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.047 15:07:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.047 15:07:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.047 15:07:17 -- paths/export.sh@5 -- # export PATH 00:26:44.047 15:07:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.047 15:07:17 -- nvmf/common.sh@46 -- # : 0 00:26:44.047 15:07:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:44.047 15:07:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:44.047 15:07:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:44.047 15:07:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.047 15:07:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.047 15:07:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:44.047 15:07:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:44.047 15:07:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:44.047 15:07:17 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:44.047 15:07:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:44.047 15:07:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.047 15:07:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:44.047 15:07:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:44.047 15:07:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:44.047 15:07:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.047 15:07:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:44.047 15:07:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.047 15:07:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:44.047 15:07:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:44.047 15:07:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:44.047 15:07:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:44.047 15:07:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:44.047 15:07:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:44.047 15:07:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.047 15:07:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.047 15:07:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:44.047 15:07:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:44.047 15:07:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:44.047 15:07:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:44.047 15:07:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:44.047 15:07:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.047 15:07:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:44.047 15:07:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:44.047 15:07:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:44.047 15:07:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:44.047 15:07:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:44.047 15:07:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:44.047 Cannot find device "nvmf_tgt_br" 00:26:44.047 15:07:17 -- nvmf/common.sh@154 -- # true 00:26:44.047 15:07:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:44.047 Cannot find device "nvmf_tgt_br2" 00:26:44.047 15:07:17 -- nvmf/common.sh@155 -- # true 
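The interface, bridge, and namespace names initialized above are torn down and then rebuilt by nvmf_veth_init in the trace that follows. Condensed into a standalone sketch (same names and addresses as the log; run as root, error handling omitted), the topology amounts to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # host-side veth should now reach the target namespace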
00:26:44.047 15:07:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:44.047 15:07:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:44.047 Cannot find device "nvmf_tgt_br" 00:26:44.047 15:07:17 -- nvmf/common.sh@157 -- # true 00:26:44.047 15:07:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:44.047 Cannot find device "nvmf_tgt_br2" 00:26:44.047 15:07:17 -- nvmf/common.sh@158 -- # true 00:26:44.047 15:07:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:44.306 15:07:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:44.306 15:07:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:44.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:44.306 15:07:17 -- nvmf/common.sh@161 -- # true 00:26:44.306 15:07:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:44.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:44.306 15:07:17 -- nvmf/common.sh@162 -- # true 00:26:44.306 15:07:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:44.306 15:07:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:44.306 15:07:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:44.306 15:07:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:44.306 15:07:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:44.306 15:07:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:44.306 15:07:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:44.306 15:07:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:44.306 15:07:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:44.306 15:07:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:44.306 15:07:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:44.306 15:07:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:44.306 15:07:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:44.306 15:07:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:44.306 15:07:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:44.306 15:07:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:44.306 15:07:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:44.306 15:07:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:44.306 15:07:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:44.306 15:07:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:44.306 15:07:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:44.306 15:07:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:44.306 15:07:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:44.306 15:07:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:44.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:44.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:26:44.306 00:26:44.306 --- 10.0.0.2 ping statistics --- 00:26:44.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.306 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:26:44.306 15:07:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:44.306 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:44.306 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:26:44.306 00:26:44.306 --- 10.0.0.3 ping statistics --- 00:26:44.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.306 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:26:44.306 15:07:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:44.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:44.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:44.306 00:26:44.306 --- 10.0.0.1 ping statistics --- 00:26:44.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.306 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:44.306 15:07:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.306 15:07:17 -- nvmf/common.sh@421 -- # return 0 00:26:44.306 15:07:17 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:44.306 15:07:17 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:45.239 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:45.239 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:45.239 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:45.239 15:07:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.239 15:07:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:45.239 15:07:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:45.239 15:07:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.239 15:07:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:45.239 15:07:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:45.239 15:07:18 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:45.239 15:07:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:45.239 15:07:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:45.239 15:07:18 -- common/autotest_common.sh@10 -- # set +x 00:26:45.239 15:07:18 -- nvmf/common.sh@469 -- # nvmfpid=103562 00:26:45.239 15:07:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:45.239 15:07:18 -- nvmf/common.sh@470 -- # waitforlisten 103562 00:26:45.239 15:07:18 -- common/autotest_common.sh@829 -- # '[' -z 103562 ']' 00:26:45.239 15:07:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.239 15:07:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.239 15:07:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.239 15:07:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.239 15:07:18 -- common/autotest_common.sh@10 -- # set +x 00:26:45.498 [2024-12-01 15:07:18.399037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:45.498 [2024-12-01 15:07:18.399141] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.498 [2024-12-01 15:07:18.543419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:45.756 [2024-12-01 15:07:18.623614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:45.756 [2024-12-01 15:07:18.624343] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.756 [2024-12-01 15:07:18.624671] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.756 [2024-12-01 15:07:18.625014] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.756 [2024-12-01 15:07:18.625443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.756 [2024-12-01 15:07:18.625680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.756 [2024-12-01 15:07:18.625684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.756 [2024-12-01 15:07:18.625599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.322 15:07:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:46.322 15:07:19 -- common/autotest_common.sh@862 -- # return 0 00:26:46.322 15:07:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:46.322 15:07:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:46.322 15:07:19 -- common/autotest_common.sh@10 -- # set +x 00:26:46.581 15:07:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:46.581 15:07:19 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:46.581 15:07:19 -- scripts/common.sh@312 -- # local nvmes 00:26:46.581 15:07:19 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:46.581 15:07:19 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:46.581 15:07:19 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:46.581 15:07:19 -- scripts/common.sh@297 -- # local bdf= 00:26:46.581 15:07:19 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:46.581 15:07:19 -- scripts/common.sh@232 -- # local class 00:26:46.581 15:07:19 -- scripts/common.sh@233 -- # local subclass 00:26:46.581 15:07:19 -- scripts/common.sh@234 -- # local progif 00:26:46.581 15:07:19 -- scripts/common.sh@235 -- # printf %02x 1 00:26:46.581 15:07:19 -- scripts/common.sh@235 -- # class=01 00:26:46.581 15:07:19 -- scripts/common.sh@236 -- # printf %02x 8 00:26:46.581 15:07:19 -- scripts/common.sh@236 -- # subclass=08 00:26:46.581 15:07:19 -- scripts/common.sh@237 -- # printf %02x 2 00:26:46.581 15:07:19 -- scripts/common.sh@237 -- # progif=02 00:26:46.581 15:07:19 -- scripts/common.sh@239 -- # hash lspci 00:26:46.581 15:07:19 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:46.581 15:07:19 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:46.581 15:07:19 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:46.581 15:07:19 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:46.581 15:07:19 -- scripts/common.sh@244 -- # tr -d '"' 00:26:46.581 15:07:19 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:46.581 15:07:19 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:46.581 15:07:19 -- scripts/common.sh@15 -- # local i 00:26:46.581 15:07:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:46.581 15:07:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:46.581 15:07:19 -- scripts/common.sh@24 -- # return 0 00:26:46.581 15:07:19 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:46.581 15:07:19 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:46.581 15:07:19 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:46.581 15:07:19 -- scripts/common.sh@15 -- # local i 00:26:46.581 15:07:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:46.581 15:07:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:46.581 15:07:19 -- scripts/common.sh@24 -- # return 0 00:26:46.581 15:07:19 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:46.581 15:07:19 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:46.581 15:07:19 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:46.581 15:07:19 -- scripts/common.sh@322 -- # uname -s 00:26:46.581 15:07:19 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:46.581 15:07:19 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:46.581 15:07:19 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:46.581 15:07:19 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:46.581 15:07:19 -- scripts/common.sh@322 -- # uname -s 00:26:46.581 15:07:19 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:46.581 15:07:19 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:46.581 15:07:19 -- scripts/common.sh@327 -- # (( 2 )) 00:26:46.581 15:07:19 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:46.581 15:07:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:46.581 15:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:46.581 15:07:19 -- common/autotest_common.sh@10 -- # set +x 00:26:46.581 ************************************ 00:26:46.581 START TEST spdk_target_abort 00:26:46.581 ************************************ 00:26:46.581 15:07:19 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:46.581 15:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.581 15:07:19 -- common/autotest_common.sh@10 -- # set +x 00:26:46.581 spdk_targetn1 00:26:46.581 15:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:46.581 15:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.581 15:07:19 -- common/autotest_common.sh@10 -- # set +x 00:26:46.581 [2024-12-01 
15:07:19.604001] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.581 15:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:46.581 15:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.581 15:07:19 -- common/autotest_common.sh@10 -- # set +x 00:26:46.581 15:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:46.581 15:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.581 15:07:19 -- common/autotest_common.sh@10 -- # set +x 00:26:46.581 15:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:46.581 15:07:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.581 15:07:19 -- common/autotest_common.sh@10 -- # set +x 00:26:46.581 [2024-12-01 15:07:19.644845] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.581 15:07:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:46.581 15:07:19 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:49.866 Initializing NVMe Controllers 00:26:49.866 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:49.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:49.866 Initialization complete. Launching workers. 00:26:49.866 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11381, failed: 0 00:26:49.866 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1150, failed to submit 10231 00:26:49.866 success 774, unsuccess 376, failed 0 00:26:49.866 15:07:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:49.866 15:07:22 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:53.155 Initializing NVMe Controllers 00:26:53.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:53.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:53.155 Initialization complete. Launching workers. 00:26:53.155 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5915, failed: 0 00:26:53.155 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1229, failed to submit 4686 00:26:53.155 success 289, unsuccess 940, failed 0 00:26:53.155 15:07:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:53.155 15:07:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:56.448 Initializing NVMe Controllers 00:26:56.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:56.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:56.448 Initialization complete. Launching workers. 
00:26:56.448 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31533, failed: 0 00:26:56.448 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2623, failed to submit 28910 00:26:56.448 success 501, unsuccess 2122, failed 0 00:26:56.448 15:07:29 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:56.448 15:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.448 15:07:29 -- common/autotest_common.sh@10 -- # set +x 00:26:56.448 15:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.448 15:07:29 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:56.448 15:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.448 15:07:29 -- common/autotest_common.sh@10 -- # set +x 00:26:57.014 15:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.014 15:07:29 -- target/abort_qd_sizes.sh@62 -- # killprocess 103562 00:26:57.014 15:07:29 -- common/autotest_common.sh@936 -- # '[' -z 103562 ']' 00:26:57.014 15:07:29 -- common/autotest_common.sh@940 -- # kill -0 103562 00:26:57.014 15:07:29 -- common/autotest_common.sh@941 -- # uname 00:26:57.014 15:07:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:57.014 15:07:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103562 00:26:57.014 killing process with pid 103562 00:26:57.014 15:07:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:57.014 15:07:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:57.014 15:07:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103562' 00:26:57.014 15:07:29 -- common/autotest_common.sh@955 -- # kill 103562 00:26:57.014 15:07:29 -- common/autotest_common.sh@960 -- # wait 103562 00:26:57.273 00:26:57.273 real 0m10.653s 00:26:57.273 user 0m43.795s 00:26:57.273 sys 0m1.687s 00:26:57.273 15:07:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:57.273 15:07:30 -- common/autotest_common.sh@10 -- # set +x 00:26:57.273 ************************************ 00:26:57.273 END TEST spdk_target_abort 00:26:57.273 ************************************ 00:26:57.273 15:07:30 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:57.273 15:07:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:57.273 15:07:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:57.273 15:07:30 -- common/autotest_common.sh@10 -- # set +x 00:26:57.273 ************************************ 00:26:57.273 START TEST kernel_target_abort 00:26:57.273 ************************************ 00:26:57.273 15:07:30 -- common/autotest_common.sh@1114 -- # kernel_target 00:26:57.273 15:07:30 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:57.273 15:07:30 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:57.273 15:07:30 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:57.273 15:07:30 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:57.273 15:07:30 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:57.273 15:07:30 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:57.273 15:07:30 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:57.273 15:07:30 -- nvmf/common.sh@627 -- # local block nvme 00:26:57.273 15:07:30 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:57.273 15:07:30 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:57.273 15:07:30 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:57.273 15:07:30 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:57.530 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:57.530 Waiting for block devices as requested 00:26:57.787 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:57.787 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:57.787 15:07:30 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:57.787 15:07:30 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:57.787 15:07:30 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:57.787 15:07:30 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:57.787 15:07:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:57.787 No valid GPT data, bailing 00:26:57.787 15:07:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:57.787 15:07:30 -- scripts/common.sh@393 -- # pt= 00:26:57.787 15:07:30 -- scripts/common.sh@394 -- # return 1 00:26:57.788 15:07:30 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:57.788 15:07:30 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:57.788 15:07:30 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:57.788 15:07:30 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:57.788 15:07:30 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:57.788 15:07:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:57.788 No valid GPT data, bailing 00:26:58.045 15:07:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:58.045 15:07:30 -- scripts/common.sh@393 -- # pt= 00:26:58.045 15:07:30 -- scripts/common.sh@394 -- # return 1 00:26:58.045 15:07:30 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:58.045 15:07:30 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:58.045 15:07:30 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:58.045 15:07:30 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:58.045 15:07:30 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:58.045 15:07:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:58.045 No valid GPT data, bailing 00:26:58.045 15:07:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:58.045 15:07:30 -- scripts/common.sh@393 -- # pt= 00:26:58.045 15:07:30 -- scripts/common.sh@394 -- # return 1 00:26:58.046 15:07:30 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:58.046 15:07:30 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:58.046 15:07:30 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:58.046 15:07:30 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:58.046 15:07:30 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:58.046 15:07:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:58.046 No valid GPT data, bailing 00:26:58.046 15:07:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:58.046 15:07:31 -- scripts/common.sh@393 -- # pt= 00:26:58.046 15:07:31 -- scripts/common.sh@394 -- # return 1 00:26:58.046 15:07:31 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:58.046 15:07:31 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:58.046 15:07:31 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:58.046 15:07:31 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:58.046 15:07:31 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:58.046 15:07:31 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:58.046 15:07:31 -- nvmf/common.sh@654 -- # echo 1 00:26:58.046 15:07:31 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:58.046 15:07:31 -- nvmf/common.sh@656 -- # echo 1 00:26:58.046 15:07:31 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:58.046 15:07:31 -- nvmf/common.sh@663 -- # echo tcp 00:26:58.046 15:07:31 -- nvmf/common.sh@664 -- # echo 4420 00:26:58.046 15:07:31 -- nvmf/common.sh@665 -- # echo ipv4 00:26:58.046 15:07:31 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:58.046 15:07:31 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2d843004-a791-47f3-8dd7-3d04462c368b --hostid=2d843004-a791-47f3-8dd7-3d04462c368b -a 10.0.0.1 -t tcp -s 4420 00:26:58.046 00:26:58.046 Discovery Log Number of Records 2, Generation counter 2 00:26:58.046 =====Discovery Log Entry 0====== 00:26:58.046 trtype: tcp 00:26:58.046 adrfam: ipv4 00:26:58.046 subtype: current discovery subsystem 00:26:58.046 treq: not specified, sq flow control disable supported 00:26:58.046 portid: 1 00:26:58.046 trsvcid: 4420 00:26:58.046 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:58.046 traddr: 10.0.0.1 00:26:58.046 eflags: none 00:26:58.046 sectype: none 00:26:58.046 =====Discovery Log Entry 1====== 00:26:58.046 trtype: tcp 00:26:58.046 adrfam: ipv4 00:26:58.046 subtype: nvme subsystem 00:26:58.046 treq: not specified, sq flow control disable supported 00:26:58.046 portid: 1 00:26:58.046 trsvcid: 4420 00:26:58.046 subnqn: kernel_target 00:26:58.046 traddr: 10.0.0.1 00:26:58.046 eflags: none 00:26:58.046 sectype: none 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
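The configfs sequence traced just above (nvmf/common.sh@645 through @668) exports /dev/nvme1n3 as a kernel NVMe/TCP target named kernel_target on 10.0.0.1:4420. Written out as plain shell it is roughly the following; note that the xtrace does not record redirection targets, so the attribute file names here come from the standard nvmet configfs layout rather than from the log itself:

modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir subsystems/kernel_target
mkdir subsystems/kernel_target/namespaces/1
mkdir ports/1
echo SPDK-kernel_target > subsystems/kernel_target/attr_serial
echo 1 > subsystems/kernel_target/attr_allow_any_host
echo /dev/nvme1n3 > subsystems/kernel_target/namespaces/1/device_path
echo 1 > subsystems/kernel_target/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/
# Once the symlink makes the port live, discovery should show the two records
# seen in the log above (the discovery subsystem plus kernel_target):
nvme discover -t tcp -a 10.0.0.1 -s 4420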
00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:58.046 15:07:31 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:01.332 Initializing NVMe Controllers 00:27:01.333 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:01.333 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:01.333 Initialization complete. Launching workers. 00:27:01.333 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 35046, failed: 0 00:27:01.333 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 35046, failed to submit 0 00:27:01.333 success 0, unsuccess 35046, failed 0 00:27:01.333 15:07:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:01.333 15:07:34 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:04.617 Initializing NVMe Controllers 00:27:04.617 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:04.617 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:04.617 Initialization complete. Launching workers. 00:27:04.617 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 84581, failed: 0 00:27:04.617 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 36694, failed to submit 47887 00:27:04.617 success 0, unsuccess 36694, failed 0 00:27:04.617 15:07:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:04.617 15:07:37 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:07.904 Initializing NVMe Controllers 00:27:07.904 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:07.904 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:07.904 Initialization complete. Launching workers. 
00:27:07.904 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 103129, failed: 0 00:27:07.904 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 25790, failed to submit 77339 00:27:07.904 success 0, unsuccess 25790, failed 0 00:27:07.904 15:07:40 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:07.904 15:07:40 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:07.904 15:07:40 -- nvmf/common.sh@677 -- # echo 0 00:27:07.904 15:07:40 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:07.904 15:07:40 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:07.904 15:07:40 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:07.904 15:07:40 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:07.904 15:07:40 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:07.904 15:07:40 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:07.904 00:27:07.904 real 0m10.441s 00:27:07.904 user 0m5.591s 00:27:07.904 sys 0m2.087s 00:27:07.904 15:07:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:07.904 15:07:40 -- common/autotest_common.sh@10 -- # set +x 00:27:07.904 ************************************ 00:27:07.904 END TEST kernel_target_abort 00:27:07.904 ************************************ 00:27:07.904 15:07:40 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:07.904 15:07:40 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:07.904 15:07:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:07.904 15:07:40 -- nvmf/common.sh@116 -- # sync 00:27:07.904 15:07:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:07.904 15:07:40 -- nvmf/common.sh@119 -- # set +e 00:27:07.904 15:07:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:07.904 15:07:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:07.904 rmmod nvme_tcp 00:27:07.904 rmmod nvme_fabrics 00:27:07.904 rmmod nvme_keyring 00:27:07.904 15:07:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:07.905 15:07:40 -- nvmf/common.sh@123 -- # set -e 00:27:07.905 15:07:40 -- nvmf/common.sh@124 -- # return 0 00:27:07.905 15:07:40 -- nvmf/common.sh@477 -- # '[' -n 103562 ']' 00:27:07.905 15:07:40 -- nvmf/common.sh@478 -- # killprocess 103562 00:27:07.905 15:07:40 -- common/autotest_common.sh@936 -- # '[' -z 103562 ']' 00:27:07.905 15:07:40 -- common/autotest_common.sh@940 -- # kill -0 103562 00:27:07.905 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103562) - No such process 00:27:07.905 Process with pid 103562 is not found 00:27:07.905 15:07:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103562 is not found' 00:27:07.905 15:07:40 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:07.905 15:07:40 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:08.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:08.732 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:08.732 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:08.732 15:07:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:08.732 15:07:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:08.732 15:07:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.732 15:07:41 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:08.732 15:07:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.732 15:07:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:08.732 15:07:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.732 15:07:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:08.732 00:27:08.732 real 0m24.828s 00:27:08.732 user 0m50.944s 00:27:08.732 sys 0m5.189s 00:27:08.732 15:07:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:08.732 ************************************ 00:27:08.732 END TEST nvmf_abort_qd_sizes 00:27:08.732 15:07:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.732 ************************************ 00:27:08.732 15:07:41 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:08.732 15:07:41 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:08.732 15:07:41 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:08.732 15:07:41 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:08.732 15:07:41 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:08.732 15:07:41 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:08.732 15:07:41 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:08.732 15:07:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:08.732 15:07:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.732 15:07:41 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:08.732 15:07:41 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:08.732 15:07:41 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:08.732 15:07:41 -- common/autotest_common.sh@10 -- # set +x 00:27:10.637 INFO: APP EXITING 00:27:10.637 INFO: killing all VMs 00:27:10.637 INFO: killing vhost app 00:27:10.637 INFO: EXIT DONE 00:27:11.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:11.574 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:11.574 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:12.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:12.142 Cleaning 00:27:12.142 Removing: /var/run/dpdk/spdk0/config 00:27:12.142 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:12.142 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:12.142 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:12.142 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:12.142 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:12.142 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:12.142 Removing: /var/run/dpdk/spdk1/config 00:27:12.142 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:12.142 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:12.142 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:12.142 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:12.142 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:12.142 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:12.142 Removing: /var/run/dpdk/spdk2/config 00:27:12.142 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:12.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:12.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:12.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:12.400 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:12.400 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:12.400 Removing: /var/run/dpdk/spdk3/config 00:27:12.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:12.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:12.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:12.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:12.400 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:12.400 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:12.400 Removing: /var/run/dpdk/spdk4/config 00:27:12.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:12.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:12.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:12.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:12.400 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:12.400 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:12.400 Removing: /dev/shm/nvmf_trace.0 00:27:12.400 Removing: /dev/shm/spdk_tgt_trace.pid67608 00:27:12.400 Removing: /var/run/dpdk/spdk0 00:27:12.400 Removing: /var/run/dpdk/spdk1 00:27:12.400 Removing: /var/run/dpdk/spdk2 00:27:12.400 Removing: /var/run/dpdk/spdk3 00:27:12.400 Removing: /var/run/dpdk/spdk4 00:27:12.400 Removing: /var/run/dpdk/spdk_pid100493 00:27:12.400 Removing: /var/run/dpdk/spdk_pid100704 00:27:12.400 Removing: /var/run/dpdk/spdk_pid101000 00:27:12.400 Removing: /var/run/dpdk/spdk_pid101320 00:27:12.400 Removing: /var/run/dpdk/spdk_pid101888 00:27:12.400 Removing: /var/run/dpdk/spdk_pid101897 00:27:12.400 Removing: /var/run/dpdk/spdk_pid102266 00:27:12.400 Removing: /var/run/dpdk/spdk_pid102427 00:27:12.400 Removing: /var/run/dpdk/spdk_pid102584 00:27:12.400 Removing: /var/run/dpdk/spdk_pid102683 00:27:12.400 Removing: /var/run/dpdk/spdk_pid102843 00:27:12.400 Removing: /var/run/dpdk/spdk_pid102952 00:27:12.400 Removing: /var/run/dpdk/spdk_pid103637 00:27:12.400 Removing: /var/run/dpdk/spdk_pid103671 00:27:12.400 Removing: /var/run/dpdk/spdk_pid103703 00:27:12.400 Removing: /var/run/dpdk/spdk_pid103953 00:27:12.400 Removing: /var/run/dpdk/spdk_pid103988 00:27:12.400 Removing: /var/run/dpdk/spdk_pid104019 00:27:12.400 Removing: /var/run/dpdk/spdk_pid67456 00:27:12.400 Removing: /var/run/dpdk/spdk_pid67608 00:27:12.400 Removing: /var/run/dpdk/spdk_pid67929 00:27:12.400 Removing: /var/run/dpdk/spdk_pid68204 00:27:12.400 Removing: /var/run/dpdk/spdk_pid68395 00:27:12.400 Removing: /var/run/dpdk/spdk_pid68484 00:27:12.400 Removing: /var/run/dpdk/spdk_pid68583 00:27:12.400 Removing: /var/run/dpdk/spdk_pid68685 00:27:12.400 Removing: /var/run/dpdk/spdk_pid68718 00:27:12.400 Removing: /var/run/dpdk/spdk_pid68748 00:27:12.400 Removing: /var/run/dpdk/spdk_pid68822 00:27:12.400 Removing: /var/run/dpdk/spdk_pid68945 00:27:12.400 Removing: /var/run/dpdk/spdk_pid69571 00:27:12.400 Removing: /var/run/dpdk/spdk_pid69635 00:27:12.400 Removing: /var/run/dpdk/spdk_pid69700 00:27:12.400 Removing: 
/var/run/dpdk/spdk_pid69727 00:27:12.400 Removing: /var/run/dpdk/spdk_pid69806 00:27:12.400 Removing: /var/run/dpdk/spdk_pid69834 00:27:12.400 Removing: /var/run/dpdk/spdk_pid69913 00:27:12.400 Removing: /var/run/dpdk/spdk_pid69941 00:27:12.400 Removing: /var/run/dpdk/spdk_pid69992 00:27:12.400 Removing: /var/run/dpdk/spdk_pid70021 00:27:12.400 Removing: /var/run/dpdk/spdk_pid70074 00:27:12.400 Removing: /var/run/dpdk/spdk_pid70104 00:27:12.400 Removing: /var/run/dpdk/spdk_pid70252 00:27:12.400 Removing: /var/run/dpdk/spdk_pid70293 00:27:12.400 Removing: /var/run/dpdk/spdk_pid70371 00:27:12.400 Removing: /var/run/dpdk/spdk_pid70446 00:27:12.400 Removing: /var/run/dpdk/spdk_pid70465 00:27:12.400 Removing: /var/run/dpdk/spdk_pid70529 00:27:12.400 Removing: /var/run/dpdk/spdk_pid70543 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70578 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70597 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70626 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70646 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70680 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70694 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70733 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70750 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70785 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70804 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70833 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70853 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70888 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70908 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70937 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70956 00:27:12.658 Removing: /var/run/dpdk/spdk_pid70991 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71005 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71048 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71062 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71095 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71116 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71145 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71170 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71199 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71213 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71253 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71267 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71302 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71321 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71350 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71373 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71410 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71433 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71470 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71484 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71519 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71538 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71574 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71645 00:27:12.658 Removing: /var/run/dpdk/spdk_pid71750 00:27:12.658 Removing: /var/run/dpdk/spdk_pid72185 00:27:12.658 Removing: /var/run/dpdk/spdk_pid79137 00:27:12.658 Removing: /var/run/dpdk/spdk_pid79486 00:27:12.658 Removing: /var/run/dpdk/spdk_pid81913 00:27:12.658 Removing: /var/run/dpdk/spdk_pid82306 00:27:12.658 Removing: /var/run/dpdk/spdk_pid82565 00:27:12.658 Removing: /var/run/dpdk/spdk_pid82617 00:27:12.658 Removing: /var/run/dpdk/spdk_pid82932 00:27:12.658 Removing: /var/run/dpdk/spdk_pid82982 00:27:12.658 Removing: /var/run/dpdk/spdk_pid83360 00:27:12.658 Removing: /var/run/dpdk/spdk_pid83902 00:27:12.658 Removing: /var/run/dpdk/spdk_pid84333 00:27:12.658 Removing: /var/run/dpdk/spdk_pid85276 
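
Note: the long run of /var/run/dpdk/spdk_pid* removals above corresponds to lock files named after target process IDs from earlier test stages. A hypothetical sketch of sweeping such stale PID-named files follows; the naming pattern is taken from the log, the liveness check is an assumption rather than the repository's logic.

    # hypothetical sweep: drop spdk_pid* files whose owning process is gone
    for f in /var/run/dpdk/spdk_pid*; do
        pid=${f##*spdk_pid}
        # kill -0 only probes for existence; remove the file if the PID is no longer alive
        kill -0 "$pid" 2>/dev/null || rm -f "$f"
    done
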
00:27:12.658 Removing: /var/run/dpdk/spdk_pid86270 00:27:12.658 Removing: /var/run/dpdk/spdk_pid86387 00:27:12.658 Removing: /var/run/dpdk/spdk_pid86456 00:27:12.658 Removing: /var/run/dpdk/spdk_pid87936 00:27:12.658 Removing: /var/run/dpdk/spdk_pid88191 00:27:12.658 Removing: /var/run/dpdk/spdk_pid88642 00:27:12.658 Removing: /var/run/dpdk/spdk_pid88754 00:27:12.658 Removing: /var/run/dpdk/spdk_pid88908 00:27:12.658 Removing: /var/run/dpdk/spdk_pid88949 00:27:12.658 Removing: /var/run/dpdk/spdk_pid88989 00:27:12.658 Removing: /var/run/dpdk/spdk_pid89040 00:27:12.658 Removing: /var/run/dpdk/spdk_pid89198 00:27:12.658 Removing: /var/run/dpdk/spdk_pid89350 00:27:12.658 Removing: /var/run/dpdk/spdk_pid89615 00:27:12.658 Removing: /var/run/dpdk/spdk_pid89738 00:27:12.658 Removing: /var/run/dpdk/spdk_pid90153 00:27:12.658 Removing: /var/run/dpdk/spdk_pid90540 00:27:12.658 Removing: /var/run/dpdk/spdk_pid90543 00:27:12.658 Removing: /var/run/dpdk/spdk_pid92810 00:27:12.658 Removing: /var/run/dpdk/spdk_pid93122 00:27:12.658 Removing: /var/run/dpdk/spdk_pid93637 00:27:12.658 Removing: /var/run/dpdk/spdk_pid93640 00:27:12.658 Removing: /var/run/dpdk/spdk_pid93988 00:27:12.658 Removing: /var/run/dpdk/spdk_pid94011 00:27:12.658 Removing: /var/run/dpdk/spdk_pid94027 00:27:12.658 Removing: /var/run/dpdk/spdk_pid94052 00:27:12.658 Removing: /var/run/dpdk/spdk_pid94065 00:27:12.916 Removing: /var/run/dpdk/spdk_pid94204 00:27:12.916 Removing: /var/run/dpdk/spdk_pid94211 00:27:12.916 Removing: /var/run/dpdk/spdk_pid94314 00:27:12.916 Removing: /var/run/dpdk/spdk_pid94316 00:27:12.916 Removing: /var/run/dpdk/spdk_pid94430 00:27:12.916 Removing: /var/run/dpdk/spdk_pid94432 00:27:12.916 Removing: /var/run/dpdk/spdk_pid94907 00:27:12.916 Removing: /var/run/dpdk/spdk_pid94952 00:27:12.916 Removing: /var/run/dpdk/spdk_pid95110 00:27:12.916 Removing: /var/run/dpdk/spdk_pid95227 00:27:12.916 Removing: /var/run/dpdk/spdk_pid95629 00:27:12.916 Removing: /var/run/dpdk/spdk_pid95886 00:27:12.916 Removing: /var/run/dpdk/spdk_pid96387 00:27:12.916 Removing: /var/run/dpdk/spdk_pid96947 00:27:12.916 Removing: /var/run/dpdk/spdk_pid97404 00:27:12.916 Removing: /var/run/dpdk/spdk_pid97500 00:27:12.916 Removing: /var/run/dpdk/spdk_pid97571 00:27:12.916 Removing: /var/run/dpdk/spdk_pid97661 00:27:12.916 Removing: /var/run/dpdk/spdk_pid97806 00:27:12.916 Removing: /var/run/dpdk/spdk_pid97892 00:27:12.916 Removing: /var/run/dpdk/spdk_pid97981 00:27:12.916 Removing: /var/run/dpdk/spdk_pid98073 00:27:12.916 Removing: /var/run/dpdk/spdk_pid98424 00:27:12.916 Removing: /var/run/dpdk/spdk_pid99137 00:27:12.916 Clean 00:27:12.916 killing process with pid 61846 00:27:12.916 killing process with pid 61847 00:27:12.916 15:07:46 -- common/autotest_common.sh@1446 -- # return 0 00:27:12.916 15:07:46 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:12.916 15:07:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:12.916 15:07:46 -- common/autotest_common.sh@10 -- # set +x 00:27:13.174 15:07:46 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:13.175 15:07:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:13.175 15:07:46 -- common/autotest_common.sh@10 -- # set +x 00:27:13.175 15:07:46 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:13.175 15:07:46 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:13.175 15:07:46 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:13.175 15:07:46 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:13.175 15:07:46 -- spdk/autotest.sh@383 -- # hostname 00:27:13.175 15:07:46 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:13.433 geninfo: WARNING: invalid characters removed from testname! 00:27:35.365 15:08:06 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:36.404 15:08:09 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:38.939 15:08:11 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:40.841 15:08:13 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:42.744 15:08:15 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:45.278 15:08:17 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:47.179 15:08:19 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:47.179 15:08:19 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:47.179 15:08:19 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:47.179 15:08:19 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:47.179 15:08:20 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:47.179 15:08:20 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:47.179 15:08:20 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
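
Note: the lcov invocations above capture the test run, merge it with the baseline capture, and then strip third-party and generated paths from the combined report. A condensed sketch of that flow is shown here; the flags are taken from the traced commands, while the short file names (relative to the output directory) are assumptions.

    # condensed coverage post-processing, mirroring the lcov steps traced above
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    $LCOV -q -a cov_base.info -a cov_test.info -o cov_total.info                          # merge baseline + test capture
    $LCOV -q -r cov_total.info '*/dpdk/*' -o cov_total.info                               # drop DPDK sources
    $LCOV -q -r cov_total.info '/usr/*' --ignore-errors unused,unused -o cov_total.info   # drop system headers
    $LCOV -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info                       # drop example apps
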
00:27:47.179 15:08:20 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:47.179 15:08:20 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:47.179 15:08:20 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:47.179 15:08:20 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:47.179 15:08:20 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:47.179 15:08:20 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:47.179 15:08:20 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:47.179 15:08:20 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:47.179 15:08:20 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:47.179 15:08:20 -- scripts/common.sh@343 -- $ case "$op" in 00:27:47.179 15:08:20 -- scripts/common.sh@344 -- $ : 1 00:27:47.179 15:08:20 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:47.179 15:08:20 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:47.179 15:08:20 -- scripts/common.sh@364 -- $ decimal 1 00:27:47.179 15:08:20 -- scripts/common.sh@352 -- $ local d=1 00:27:47.179 15:08:20 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:47.179 15:08:20 -- scripts/common.sh@354 -- $ echo 1 00:27:47.179 15:08:20 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:47.179 15:08:20 -- scripts/common.sh@365 -- $ decimal 2 00:27:47.179 15:08:20 -- scripts/common.sh@352 -- $ local d=2 00:27:47.179 15:08:20 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:47.179 15:08:20 -- scripts/common.sh@354 -- $ echo 2 00:27:47.179 15:08:20 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:47.179 15:08:20 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:47.179 15:08:20 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:47.179 15:08:20 -- scripts/common.sh@367 -- $ return 0 00:27:47.179 15:08:20 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:47.179 15:08:20 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:47.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.179 --rc genhtml_branch_coverage=1 00:27:47.179 --rc genhtml_function_coverage=1 00:27:47.179 --rc genhtml_legend=1 00:27:47.179 --rc geninfo_all_blocks=1 00:27:47.180 --rc geninfo_unexecuted_blocks=1 00:27:47.180 00:27:47.180 ' 00:27:47.180 15:08:20 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:47.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.180 --rc genhtml_branch_coverage=1 00:27:47.180 --rc genhtml_function_coverage=1 00:27:47.180 --rc genhtml_legend=1 00:27:47.180 --rc geninfo_all_blocks=1 00:27:47.180 --rc geninfo_unexecuted_blocks=1 00:27:47.180 00:27:47.180 ' 00:27:47.180 15:08:20 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:47.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.180 --rc genhtml_branch_coverage=1 00:27:47.180 --rc genhtml_function_coverage=1 00:27:47.180 --rc genhtml_legend=1 00:27:47.180 --rc geninfo_all_blocks=1 00:27:47.180 --rc geninfo_unexecuted_blocks=1 00:27:47.180 00:27:47.180 ' 00:27:47.180 15:08:20 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:47.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.180 --rc genhtml_branch_coverage=1 00:27:47.180 --rc genhtml_function_coverage=1 00:27:47.180 --rc genhtml_legend=1 00:27:47.180 --rc geninfo_all_blocks=1 00:27:47.180 --rc geninfo_unexecuted_blocks=1 00:27:47.180 00:27:47.180 ' 00:27:47.180 15:08:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:47.180 15:08:20 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:47.180 15:08:20 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.180 15:08:20 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.180 15:08:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.180 15:08:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.180 15:08:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.180 15:08:20 -- paths/export.sh@5 -- $ export PATH 00:27:47.180 15:08:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.180 15:08:20 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:47.180 15:08:20 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:47.180 15:08:20 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733065700.XXXXXX 00:27:47.180 15:08:20 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733065700.FuhgDn 00:27:47.180 15:08:20 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:47.180 15:08:20 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:27:47.180 15:08:20 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:47.180 15:08:20 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:47.180 15:08:20 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:47.180 15:08:20 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:47.180 15:08:20 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:47.180 15:08:20 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:47.180 15:08:20 -- common/autotest_common.sh@10 -- $ set +x 00:27:47.180 15:08:20 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:47.180 15:08:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:47.180 15:08:20 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:47.180 15:08:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:47.180 15:08:20 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:47.180 15:08:20 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:47.180 15:08:20 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:47.180 15:08:20 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:47.180 15:08:20 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:47.180 15:08:20 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:47.180 15:08:20 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:47.180 + [[ -n 5963 ]] 00:27:47.180 + sudo kill 5963 00:27:47.189 [Pipeline] } 00:27:47.205 [Pipeline] // timeout 00:27:47.211 [Pipeline] } 00:27:47.227 [Pipeline] // stage 00:27:47.232 [Pipeline] } 00:27:47.248 [Pipeline] // catchError 00:27:47.257 [Pipeline] stage 00:27:47.260 [Pipeline] { (Stop VM) 00:27:47.273 [Pipeline] sh 00:27:47.555 + vagrant halt 00:27:50.842 ==> default: Halting domain... 00:27:57.420 [Pipeline] sh 00:27:57.700 + vagrant destroy -f 00:28:00.234 ==> default: Removing domain... 00:28:00.505 [Pipeline] sh 00:28:00.786 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:28:00.795 [Pipeline] } 00:28:00.810 [Pipeline] // stage 00:28:00.816 [Pipeline] } 00:28:00.830 [Pipeline] // dir 00:28:00.836 [Pipeline] } 00:28:00.851 [Pipeline] // wrap 00:28:00.858 [Pipeline] } 00:28:00.870 [Pipeline] // catchError 00:28:00.880 [Pipeline] stage 00:28:00.883 [Pipeline] { (Epilogue) 00:28:00.897 [Pipeline] sh 00:28:01.179 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:05.378 [Pipeline] catchError 00:28:05.380 [Pipeline] { 00:28:05.393 [Pipeline] sh 00:28:05.675 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:05.933 Artifacts sizes are good 00:28:05.943 [Pipeline] } 00:28:05.959 [Pipeline] // catchError 00:28:05.970 [Pipeline] archiveArtifacts 00:28:05.978 Archiving artifacts 00:28:06.132 [Pipeline] cleanWs 00:28:06.145 [WS-CLEANUP] Deleting project workspace... 00:28:06.145 [WS-CLEANUP] Deferred wipeout is used... 00:28:06.169 [WS-CLEANUP] done 00:28:06.172 [Pipeline] } 00:28:06.188 [Pipeline] // stage 00:28:06.192 [Pipeline] } 00:28:06.205 [Pipeline] // node 00:28:06.211 [Pipeline] End of Pipeline 00:28:06.245 Finished: SUCCESS
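
Note: the timing_finish step near the top of this block renders the per-step durations recorded in timing.txt as a flame graph. A hypothetical invocation matching the traced command is sketched below; redirecting stdout to an SVG file is assumed, since the log does not show where the output is written.

    # hypothetical rendering of the build timing flame graph (command taken from the trace above)
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
        --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt > build_timing.svg
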